[jira] [Commented] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433401#comment-16433401
 ] 

Hudson commented on HDFS-13056:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13966 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13966/])
HDFS-13056. Expose file-level composite CRCs in HDFS which are comparable 
across different instances/layouts (xiao: rev 
7c9cdad6d04c98db5a83e2108219bf6e6c903daf)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Sender.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileChecksum.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CrcUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/BlockChecksumType.java
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestCrcUtil.java
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestCrcComposer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/fs/Hdfs.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileChecksumCompositeCrc.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CompositeCrcFileChecksum.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockChecksumHelper.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
* (edit) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyMapper.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/BlockChecksumOptions.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/FileChecksumHelper.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/datatransfer.proto
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java
* (add) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyMapperCompositeCrc.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/StripedBlockChecksumReconstructor.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Options.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CrcComposer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/StripedBlockReconstructor.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/StripedBlockChecksumCompositeCrcReconstructor.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/StripedBlockChecksumMd5CrcReconstructor.java


> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Assignee: Dennis Huo
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.003.patch, 
> 

[jira] [Updated] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-04-10 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13056:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed patch 14 to trunk.

Thanks [~dennishuo] for the great work here! Also thanks [~wzzdreamer], 
[~ajayydv], [~arpitagarwal], [~ste...@apache.org] for the reviews and comments!

[~dennishuo]:

Please add a release note for this (by editing the JIRA and entering it into 
the 'Release Note' field).

It seems you wanted to target this to branch-2.8. If this is true, please feel 
free to reopen and attach branch-2 patches. Precommit will only run if the Jira 
is in Patch Available state.

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Assignee: Dennis Huo
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.003.patch, 
> HDFS-13056-branch-2.8.004.patch, HDFS-13056-branch-2.8.005.patch, 
> HDFS-13056-branch-2.8.poc1.patch, HDFS-13056.001.patch, HDFS-13056.002.patch, 
> HDFS-13056.003.patch, HDFS-13056.003.patch, HDFS-13056.004.patch, 
> HDFS-13056.005.patch, HDFS-13056.006.patch, HDFS-13056.007.patch, 
> HDFS-13056.008.patch, HDFS-13056.009.patch, HDFS-13056.010.patch, 
> HDFS-13056.011.patch, HDFS-13056.012.patch, HDFS-13056.013.patch, 
> HDFS-13056.014.patch, Reference_only_zhen_PPOC_hadoop2.6.X.diff, 
> hdfs-file-composite-crc32-v1.pdf, hdfs-file-composite-crc32-v2.pdf, 
> hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on hierarchical MD5 approach, which also adds the problem 
> that checksums of basic replicated files are not comparable to striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped vs replicated files, between different HDFS 
> instances, and possible even between HDFS and other external storage systems. 
> This feature can also be added in-place to be compatible with existing block 
> metadata, and doesn't need to change the normal path of chunk verification, 
> so is minimally invasive. This also means even large preexisting HDFS 
> deployments could adopt this feature to retroactively sync data. A detailed 
> design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf
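
For intuition on why a composite CRC is layout-agnostic: crc(A || B) is fully 
determined by crc(A), crc(B), and len(B), so per-chunk CRCs can be folded 
together in order, no matter where the chunk and block boundaries fall. Below 
is a minimal, self-contained sketch of that combine step, ported from zlib's 
well-known crc32_combine() GF(2) construction; it illustrates the underlying 
math only and is not the CrcComposer/CrcUtil API added by this patch.

{code:java}
import java.util.zip.CRC32;

public class Crc32CombineDemo {
  private static final long POLY = 0xEDB88320L; // reflected CRC-32 polynomial

  // Multiply a GF(2) 32x32 matrix by a 32-bit vector.
  private static long times(long[] mat, long vec) {
    long sum = 0;
    for (int i = 0; vec != 0; vec >>>= 1, i++) {
      if ((vec & 1) != 0) {
        sum ^= mat[i];
      }
    }
    return sum;
  }

  // square = mat * mat over GF(2).
  private static void square(long[] square, long[] mat) {
    for (int n = 0; n < 32; n++) {
      square[n] = times(mat, mat[n]);
    }
  }

  /** Returns crc(A||B) given crc(A), crc(B) and len(B) in bytes. */
  static long combine(long crc1, long crc2, long len2) {
    long[] even = new long[32];
    long[] odd = new long[32];
    odd[0] = POLY;                 // operator for one zero bit
    long row = 1;
    for (int n = 1; n < 32; n++) {
      odd[n] = row;
      row <<= 1;
    }
    square(even, odd);             // operator for two zero bits
    square(odd, even);             // operator for four zero bits
    do {                           // append len2 zero bytes to crc1
      square(even, odd);           // first pass: eight zero bits = one byte
      if ((len2 & 1) != 0) {
        crc1 = times(even, crc1);
      }
      len2 >>>= 1;
      if (len2 == 0) {
        break;
      }
      square(odd, even);
      if ((len2 & 1) != 0) {
        crc1 = times(odd, crc1);
      }
      len2 >>>= 1;
    } while (len2 != 0);
    return crc1 ^ crc2;            // fold in the second CRC
  }

  public static void main(String[] args) {
    byte[] a = "hello ".getBytes();
    byte[] b = "composite crc".getBytes();
    CRC32 ca = new CRC32(); ca.update(a);
    CRC32 cb = new CRC32(); cb.update(b);
    CRC32 whole = new CRC32(); whole.update(a); whole.update(b);
    long composed = combine(ca.getValue(), cb.getValue(), b.length);
    System.out.println(composed == whole.getValue()); // true
  }
}
{code}

Because the combine step needs only per-chunk CRCs and lengths, two files with 
identical bytes but different block/chunk configurations compose to the same 
file-level value.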



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13418) NetworkTopology should be configurable when enable DFSNetworkTopology

2018-04-10 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13418:
-
Status: Patch Available  (was: Open)

>  NetworkTopology should be configurable when enable DFSNetworkTopology
> --
>
> Key: HDFS-13418
> URL: https://issues.apache.org/jira/browse/HDFS-13418
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.1
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HDFS-13418.001.patch
>
>
> In HDFS-11530 we introduce DFSNetworkTopology and in HDFS-11998 we set 
> DFSNetworkTopology as the default implementation.
> We still have {{net.topology.impl=org.apache.hadoop.net.NetworkTopology}} in 
> core-site.default. Actually this property takes no effect once 
> {{dfs.use.dfs.network.topology}} is true.
> In {{DatanodeManager}}, networkTopology is initialized as 
> {code}
> if (useDfsNetworkTopology) {
>   networktopology = DFSNetworkTopology.getInstance(conf);
> } else {
>   networktopology = NetworkTopology.getInstance(conf);
> }
> {code}
> I think we should still make the NetworkTopology configurable rather than 
> hard-code the implementation, since we may need another NetworkTopology impl.
> I am not sure if there are other considerations. Any thoughts? [~vagarychen] 
> [~linyiqun]
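
As a rough sketch of what "configurable" could look like here (a hedged 
illustration only – it reuses the existing {{net.topology.impl}} key and 
Hadoop's reflection utilities, and is not taken from any patch), the 
hard-coded if/else above could become a reflective lookup with 
{{DFSNetworkTopology}} as the default:

{code}
// Sketch: resolve the topology implementation from configuration instead of
// hard-coding it; DFSNetworkTopology remains the default.
Class<? extends NetworkTopology> clazz = conf.getClass(
    CommonConfigurationKeysPublic.NET_TOPOLOGY_IMPL_KEY,
    DFSNetworkTopology.class, NetworkTopology.class);
networktopology = ReflectionUtils.newInstance(clazz, conf);
{code}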



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13418) NetworkTopology should be configurable when enable DFSNetworkTopology

2018-04-10 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433364#comment-16433364
 ] 

Yiqun Lin commented on HDFS-13418:
--

Hi [~Tao Jie], your proposal also makes sense to me.
 I agree with [~vagarychen]'s comment that we need complete documentation for 
this. Please also add the new setting to hdfs-default.xml. In addition, it 
would be better to add a new UT to verify the new behavior.
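
For illustration, the hdfs-default.xml entry being requested could look 
roughly like this (the property name and default value here are assumptions 
for discussion, not necessarily what the patch uses):

{code:xml}
<property>
  <name>net.topology.impl</name>
  <value>org.apache.hadoop.hdfs.net.DFSNetworkTopology</value>
  <description>
    The NetworkTopology implementation to instantiate when
    dfs.use.dfs.network.topology is true.
  </description>
</property>
{code}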

>  NetworkTopology should be configurable when enable DFSNetworkTopology
> --
>
> Key: HDFS-13418
> URL: https://issues.apache.org/jira/browse/HDFS-13418
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.1
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HDFS-13418.001.patch
>
>
> In HDFS-11530 we introduce DFSNetworkTopology and in HDFS-11998 we set 
> DFSNetworkTopology as the default implementation.
> We still have {{net.topology.impl=org.apache.hadoop.net.NetworkTopology}} in 
> core-site.default. Actually this property takes no effect once 
> {{dfs.use.dfs.network.topology}} is true.
> In {{DatanodeManager}}, networkTopology is initialized as 
> {code}
> if (useDfsNetworkTopology) {
>   networktopology = DFSNetworkTopology.getInstance(conf);
> } else {
>   networktopology = NetworkTopology.getInstance(conf);
> }
> {code}
> I think we should still make the NetworkTopology configurable rather than 
> hard-code the implementation, since we may need another NetworkTopology impl.
> I am not sure if there are other considerations. Any thoughts? [~vagarychen] 
> [~linyiqun]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13333) Ozone: Introduce a new SCM Exception which will be thrown when mandatory property is missing

2018-04-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433357#comment-16433357
 ] 

genericqa commented on HDFS-13333:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-13333 does not apply to HDFS-7240. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13333 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918501/HDFS-13333-HDFS-7240.003.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23871/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Introduce a new SCM Exception which will be thrown when mandatory 
> property is missing 
> -
>
> Key: HDFS-13333
> URL: https://issues.apache.org/jira/browse/HDFS-13333
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13333-HDFS-7240.001.patch, 
> HDFS-13333-HDFS-7240.002.patch, HDFS-13333-HDFS-7240.003.patch, 
> HDFS-13333.001.patch
>
>
> It's better to have a separate SCM Exception to indicate a missing mandatory 
> property. This was proposed by [~xyao] in [this comment|
> https://issues.apache.org/jira/browse/HDFS-13300?focusedCommentId=16408553&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16408553]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13419) client can communicate to server even if hdfs delegation token expired

2018-04-10 Thread wangqiang.shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433353#comment-16433353
 ] 

wangqiang.shen commented on HDFS-13419:
---

I don't have this use case, but I think since the token has an expiry time, we 
should prevent it from being used to access services once it has expired. Is 
that right?

> client can communicate to server even if hdfs delegation token expired
> --
>
> Key: HDFS-13419
> URL: https://issues.apache.org/jira/browse/HDFS-13419
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangqiang.shen
>Priority: Minor
>
> I was testing the HDFS delegation token expiry problem using Spark 
> Streaming. If I set my batch interval smaller than 10 sec, my Spark 
> Streaming program does not die, but if the batch interval is set bigger than 
> 10 sec, the program dies because of the HDFS delegation token expiry 
> problem, with the exception as follows:
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  token (HDFS_DELEGATION_TOKEN token 14042 for test) is expired
>   at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>   at com.sun.proxy.$Proxy11.getListing(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:554)
>   at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy12.getListing(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1969)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1952)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:693)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:105)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:755)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:751)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:751)
>   at com.envisioncn.arch.App$2$1.call(App.java:120)
>   at com.envisioncn.arch.App$2$1.call(App.java:91)
>   at 
> org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
>   at 
> org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:902)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:902)
>   at 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
>   at 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
>   at org.apache.spark.scheduler.Task.run(Task.scala:86)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> The Spark Streaming program only calls the FileSystem.listStatus function in 
> every batch:
> {code:java}
> FileSystem fs = FileSystem.get(new Configuration());
> FileStatus[] status =  fs.listStatus(new Path("/"));
> for(FileStatus status1 : status){
> System.out.println(status1.getPath());
> }
> {code}
> And I found that when the Hadoop client sends an RPC request to the server, 
> it will first get a connection object and set up the connection if the 
> connection does not exist. It will get a SaslRpcClient to connect to the 
> server side in the connection setup stage, and the server will also 
> authenticate the client at that stage. But if the connection already exists, 
> the client 

[jira] [Commented] (HDFS-13333) Ozone: Introduce a new SCM Exception which will be thrown when mandatory property is missing

2018-04-10 Thread LiXin Ge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433348#comment-16433348
 ] 

LiXin Ge commented on HDFS-13333:
-

[~elek] Thanks for your review and advice. You reminded me of the 
compatibility and universality concerns that had slipped my mind while I was 
modifying the {{ServicePlugin}} interface; I agree with you.

For the code logic itself, a dedicated exception type would be better. For 
the Ozone system, compatibility is a problem, but not an unacceptable one 
considering the number of users we currently have. As you mentioned, the real 
problem in my opinion is universality, which affects all the plugins we will 
have in the future.

In patch 003 I reverted {{HddsDatanodeService#start}} to just throw 
RuntimeException, but retained the specific log message for each exception 
type.
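
For reference, the kind of exception type being discussed might look roughly 
like the following (the class name and package are assumptions for 
illustration, not necessarily what the patch defines):

{code:java}
package org.apache.hadoop.hdds.scm.exceptions;

import java.io.IOException;

/** Raised when a mandatory SCM configuration property is not set. */
public class MissingMandatoryPropertyException extends IOException {
  public MissingMandatoryPropertyException(String propertyName) {
    super("Mandatory configuration property is missing: " + propertyName);
  }
}
{code}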

> Ozone: Introduce a new SCM Exception which will be thrown when mandatory 
> property is missing 
> -
>
> Key: HDFS-13333
> URL: https://issues.apache.org/jira/browse/HDFS-13333
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13333-HDFS-7240.001.patch, 
> HDFS-13333-HDFS-7240.002.patch, HDFS-13333-HDFS-7240.003.patch, 
> HDFS-13333.001.patch
>
>
> It's better to have a separate SCM Exception to indicate a missing mandatory 
> property. This was proposed by [~xyao] in [this comment|
> https://issues.apache.org/jira/browse/HDFS-13300?focusedCommentId=16408553&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16408553]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13333) Ozone: Introduce a new SCM Exception which will be thrown when mandatory property is missing

2018-04-10 Thread LiXin Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDFS-13333:

Attachment: HDFS-13333-HDFS-7240.003.patch

> Ozone: Introduce a new SCM Exception which will be thrown when mandatory 
> property is missing 
> -
>
> Key: HDFS-13333
> URL: https://issues.apache.org/jira/browse/HDFS-13333
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13333-HDFS-7240.001.patch, 
> HDFS-13333-HDFS-7240.002.patch, HDFS-13333-HDFS-7240.003.patch, 
> HDFS-13333.001.patch
>
>
> It's better to have a separate SCM Exception to indicate a missing mandatory 
> property. This was proposed by [~xyao] in [this comment|
> https://issues.apache.org/jira/browse/HDFS-13300?focusedCommentId=16408553&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16408553]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13333) Ozone: Introduce a new SCM Exception which will be thrown when mandatory property is missing

2018-04-10 Thread LiXin Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDFS-13333:

Status: Patch Available  (was: Open)

> Ozone: Introduce a new SCM Exception which will be thrown when mandatory 
> property is missing 
> -
>
> Key: HDFS-13333
> URL: https://issues.apache.org/jira/browse/HDFS-13333
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13333-HDFS-7240.001.patch, 
> HDFS-13333-HDFS-7240.002.patch, HDFS-13333-HDFS-7240.003.patch, 
> HDFS-13333.001.patch
>
>
> It's better to have a separate SCM Exception to indicate a missing mandatory 
> property. This was proposed by [~xyao] in [this comment|
> https://issues.apache.org/jira/browse/HDFS-13300?focusedCommentId=16408553&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16408553]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13333) Ozone: Introduce a new SCM Exception which will be thrown when mandatory property is missing

2018-04-10 Thread LiXin Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDFS-13333:

Attachment: (was: HDFS-13333-HDFS-7240.003.patch)

> Ozone: Introduce a new SCM Exception which will be thrown when mandatory 
> property is missing 
> -
>
> Key: HDFS-13333
> URL: https://issues.apache.org/jira/browse/HDFS-13333
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13333-HDFS-7240.001.patch, 
> HDFS-13333-HDFS-7240.002.patch, HDFS-13333.001.patch
>
>
> It's better to have a separate SCM Exception to indicate a missing mandatory 
> property. This was proposed by [~xyao] in [this comment|
> https://issues.apache.org/jira/browse/HDFS-13300?focusedCommentId=16408553&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16408553]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13333) Ozone: Introduce a new SCM Exception which will be thrown when mandatory property is missing

2018-04-10 Thread LiXin Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDFS-13333:

Status: Open  (was: Patch Available)

> Ozone: Introduce a new SCM Exception which will be thrown when mandatory 
> property is missing 
> -
>
> Key: HDFS-13333
> URL: https://issues.apache.org/jira/browse/HDFS-13333
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13333-HDFS-7240.001.patch, 
> HDFS-13333-HDFS-7240.002.patch, HDFS-13333.001.patch
>
>
> It's better to have a separate SCM Exception to indicate a missing mandatory 
> property. This was proposed by [~xyao] in [this comment|
> https://issues.apache.org/jira/browse/HDFS-13300?focusedCommentId=16408553&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16408553]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13333) Ozone: Introduce a new SCM Exception which will be thrown when mandatory property is missing

2018-04-10 Thread LiXin Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDFS-13333:

Attachment: HDFS-13333-HDFS-7240.003.patch

> Ozone: Introduce a new SCM Exception which will be thrown when mandatory 
> property is missing 
> -
>
> Key: HDFS-13333
> URL: https://issues.apache.org/jira/browse/HDFS-13333
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13333-HDFS-7240.001.patch, 
> HDFS-13333-HDFS-7240.002.patch, HDFS-13333.001.patch
>
>
> It's better to have a separate SCM Exception to indicate a missing mandatory 
> property. This was proposed by [~xyao] in [this comment|
> https://issues.apache.org/jira/browse/HDFS-13300?focusedCommentId=16408553&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16408553]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12615) Router-based HDFS federation phase 2

2018-04-10 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433344#comment-16433344
 ] 

Yiqun Lin commented on HDFS-12615:
--

It looks amazing that we have fixed and made so many improvements for RBF. 
Thanks to all the contributors!
 Just revisiting the previous plan for RBF phase 2 to make sure we are heading 
in the same direction:
{quote}At this point the only critical task missing is enabling security 
(mainly HDFS-12284), Sherwood Zheng has been doing progress there and we are 
only missing the delegation tokens.
 The other tasks are mostly done we should try to complete them, the most 
important in my opinion:
 * HDFS-12773 to do a proper HDFS State Store.
 * HDFS-12792 to have complete unit tests.

Then there are a bunch of open topics:
 # Rebalancer of data across subclusters. We have some internal prototype based 
on DistCp that I shared with Wei Yan (we could also include moving blocks 
across blockpools). We could start a separate umbrella for this.
 # DNs interacting with the Routers. I recently opened HDFS-13098 to discuss 
this but this could.
 # Spreading a mount point across multiple mount points. This is similar to 
merge mount points and we have most of it working in production.{quote}
Item #3 (*Spreading a mount point across multiple mount points*) in the open 
topics has been implemented now (the doc for it (HDFS-13237) will be merged 
soon). HDFS-12773 and HDFS-12792 are both merged.

Now there are two major work items in phase 2:
 * RBF security tasks: HDFS-12284 and HDFS-13358.
 * DBMS State Store implementation: HDFS-13245.

[~elgoiri], I have some questions:
 * Besides these, is there anything else we need to implement?
 * What's the next plan for RBF phase 2?
 * What's the plan for the RBF Rebalancer? It looks like a useful tool for 
users.

> Router-based HDFS federation phase 2
> 
>
> Key: HDFS-12615
> URL: https://issues.apache.org/jira/browse/HDFS-12615
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: RBF
>
> This umbrella JIRA tracks a set of improvements over the Router-based HDFS 
> federation (HDFS-10467).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13245) RBF: State store DBMS implementation

2018-04-10 Thread Yiran Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1644#comment-1644
 ] 

Yiran Wu commented on HDFS-13245:
-

I am implementing this to support the State Store using a database as the 
backend (MySQL, SQL Server, HSQLDB).


> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
>
> Add a DBMS implementation for the State Store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-10 Thread Jinglun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433325#comment-16433325
 ] 

Jinglun commented on HDFS-13388:


Thanks [~elgoiri]. Keeping the patch in trunk would be enough for me. :)

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, 
> HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, 
> HADOOP-13388.0006.patch
>
>
> In HDFS-7858 RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even when we have already found the successful NN.
>  That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class, which handles invoked methods by 
> calling multiple configured NNs.
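
A rough sketch of the fix direction implied by the description (the method 
shape and helper name are hypothetical; {{currentUsedProxy}} is the field 
named above): once a successful NN is known, return it directly instead of 
the fan-out dynamic proxy.

{code:java}
// Sketch: short-circuit to the previously successful NN when known; only use
// the hedging dynamic proxy before the first success or after a failover
// resets currentUsedProxy.
@Override
public synchronized ProxyInfo<T> getProxy() {
  if (currentUsedProxy != null) {
    return currentUsedProxy;     // subsequent calls go to a single NN
  }
  return createHedgingProxy();   // hypothetical helper: probe all NNs once
}
{code}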



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12950) [oiv] ls will fail in secure cluster

2018-04-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433254#comment-16433254
 ] 

genericqa commented on HDFS-12950:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-12950 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918461/HDFS-12950.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3954e3dabe62 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e813975 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23870/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23870/testReport/ |
| Max. process+thread count | 3467 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Commented] (HDFS-13414) Ozone: Update existing Ozone documentation according to the recent changes

2018-04-10 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433184#comment-16433184
 ] 

Xiaoyu Yao commented on HDFS-13414:
---

Thanks [~elek] for working on this. The patch looks good to me overall. I just 
have one question regarding the HTTP port change from 9864 to 9880 for the 
Ozone REST service.

Do we need to change the port (9864) below in the docker-compose file section 
for the datanode, to keep the Ozone HTTP server accessible?

{code}
datanode:
  image: apache/hadoop-runner
  volumes:
    - ${HADOOPDIR}:/opt/hadoop
  ports:
    - 9864
  command: ["/opt/hadoop/bin/ozone","datanode"]
  env_file:
    - ./docker-config
{code}

 

 

> Ozone: Update existing Ozone documentation according to the recent changes
> --
>
> Key: HDFS-13414
> URL: https://issues.apache.org/jira/browse/HDFS-13414
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Attachments: HDFS-13414-HDFS-7240.001.patch
>
>
> 1. Datanode port has been changed
> 2. remove the references to the branch (prepare to merge)
> 3. CLI commands are changed (eg. ozone scm)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-10 Thread Plamen Jeliazkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433151#comment-16433151
 ] 

Plamen Jeliazkov edited comment on HDFS-13399 at 4/10/18 11:24 PM:
---

Hey [~xkrogen], thanks for the review! Here's what I found:

(1) I am not sure if that Javadoc link error is related to my changes. I did 
not modify {{NameNodeProxies#createProxy(Configuration, URI, Class)}} and with 
my changes shelved / removed I still see the Javadoc link error on 
{{NameNodeProxiesClient#createProxyWithClientProtocol}}. Unless you are 
referring to something else I am not sure what I missed...

(2) I agree they appear related. Took a quick look. It seems 
{{Client.createCall(RPC.RpcKind rpcKind, Writable rpcRequest)}} is now unused 
in favor of the overloaded method that adds {{AlignmentContext}} as a 
parameter. Should I just delete {{Client.createCall(RPC.RpcKind rpcKind, 
Writable rpcRequest)}} or should I modify TestIPC and TestAsyncIPC to call the 
new method? I would vote to remove the two parameter method and update the 
tests as it really seems it is just unused now.

(3) I will apply your changes in the next patch; I have also made some test 
optimizations of my own – mostly around removing unneeded 
{{cluster.waitActive()}} calls.

As for the configuration question – it was more so out of curiosity, to see 
whether you wanted this to just be a standard feature of {{DFSClients}} and 
{{FSNamesystem}} going forward, or whether you wanted to try to preserve the 
legacy behavior. I agree the overhead is very minimal, in both processing and 
payload. I can only think it may be useful to revert to the "old behavior" in 
the event that there is ever some crippling bug in the client or server side 
of the RPC processing. I would vote to make it a standard feature, but again, 
I wanted to ask and hear your opinions.


was (Author: zero45):
(1) I am not sure if that Javadoc link error is related to my changes. I did 
not modify {{NameNodeProxies#createProxy(Configuration, URI, Class)}} and with 
my changes shelved / removed I still see the Javadoc link error on 
{{NameNodeProxiesClient#createProxyWithClientProtocol}}. Unless you are 
referring to something else I am not sure what I missed...

(2) I agree they appear related. Took a quick look. It seems 
{{Client.createCall(RPC.RpcKind rpcKind, Writable rpcRequest)}} is now unused 
in favor of the overloaded method that adds {{AlignmentContext}} as a 
parameter. Should I just delete {{Client.createCall(RPC.RpcKind rpcKind, 
Writable rpcRequest)}} or should I modify TestIPC and TestAsyncIPC to call the 
new method? I would vote to remove the two parameter method and update the 
tests as it really seems it is just unused now.

(3) I will apply your changes in the next patch; I have also made some test 
optimizations of my own – mostly around removing unneeded 
{{cluster.waitActive()}} calls.

As for the configuration question – it was more so out of curiosity, to see 
whether you wanted this to just be a standard feature of {{DFSClients}} and 
{{FSNamesystem}} going forward, or whether you wanted to try to preserve the 
legacy behavior. I agree the overhead is very minimal, in both processing and 
payload. I can only think it may be useful to revert to the "old behavior" in 
the event that there is ever some crippling bug in the client or server side 
of the RPC processing. I would vote to make it a standard feature, but again, 
I wanted to ask and hear your opinions.

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch, 
> HDFS-13399-HDFS-12943.001.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-10 Thread Plamen Jeliazkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433151#comment-16433151
 ] 

Plamen Jeliazkov commented on HDFS-13399:
-

(1) I am not sure if that Javadoc link error is related to my changes. I did 
not modify {{NameNodeProxies#createProxy(Configuration, URI, Class)}} and with 
my changes shelved / removed I still see the Javadoc link error on 
{{NameNodeProxiesClient#createProxyWithClientProtocol}}. Unless you are 
referring to something else I am not sure what I missed...

(2) I agree they appear related. Took a quick look. It seems 
{{Client.createCall(RPC.RpcKind rpcKind, Writable rpcRequest)}} is now unused 
in favor of the overloaded method that adds {{AlignmentContext}} as a 
parameter. Should I just delete {{Client.createCall(RPC.RpcKind rpcKind, 
Writable rpcRequest)}} or should I modify TestIPC and TestAsyncIPC to call the 
new method? I would vote to remove the two parameter method and update the 
tests as it really seems it is just unused now.

(3) I will apply your changes in the next patch; I have also made some test 
optimizations of my own – mostly around removing unneeded 
{{cluster.waitActive()}} calls.

As for the configuration question – it was more so out of curiosity, to see 
whether you wanted this to just be a standard feature of {{DFSClients}} and 
{{FSNamesystem}} going forward, or whether you wanted to try to preserve the 
legacy behavior. I agree the overhead is very minimal, in both processing and 
payload. I can only think it may be useful to revert to the "old behavior" in 
the event that there is ever some crippling bug in the client or server side 
of the RPC processing. I would vote to make it a standard feature, but again, 
I wanted to ask and hear your opinions.

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch, 
> HDFS-13399-HDFS-12943.001.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13423) Ozone: Clean-up of ozone related change from hadoop-hdfs-project

2018-04-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433149#comment-16433149
 ] 

genericqa commented on HDFS-13423:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 21 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 0s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
13s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
26s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
29s{color} | {color:red} integration-test in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} integration-test in HDFS-7240 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 29m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
23s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
23s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
25s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
21s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} integration-test in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | 

[jira] [Commented] (HDFS-12950) [oiv] ls will fail in secure cluster

2018-04-10 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433119#comment-16433119
 ] 

Ajay Kumar commented on HDFS-12950:
---

[~jojochuang], thanks for posting the patch. LGTM. Two minor comments:
* For TestOfflineImageViewer#testWebImageViewerSecureMode we should assert the 
specific exception and error message using LambdaTestUtils#intercept or 
GenericTestUtils#assertExceptionContains
* Update the javadoc for WebImageViewer#start
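
As a concrete example of the first suggestion (a sketch only – the expected 
exception type, message fragment, and test wiring here are assumptions):

{code:java}
// Sketch: assert the specific exception type and error message instead of
// catching a generic exception.
LambdaTestUtils.intercept(IOException.class,
    "expected secure-mode error message fragment",
    () -> {
      WebImageViewer viewer =
          new WebImageViewer(NetUtils.createSocketAddr("localhost:0"));
      viewer.start("fsimage-file");
      return null;
    });
{code}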


> [oiv] ls will fail in  secure cluster
> -
>
> Key: HDFS-12950
> URL: https://issues.apache.org/jira/browse/HDFS-12950
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-12950.001.patch
>
>
> If we execute ls, it will throw the following:
> {noformat}
> hdfs dfs -ls webhdfs://127.0.0.1:5978/
> ls: Invalid value for webhdfs parameter "op"
> {noformat}
> When the client is configured with security (i.e. 
> hadoop.security.authentication=KERBEROS), webhdfs will request a delegation 
> token, which is not implemented, and hence it will throw "ls: Invalid value 
> for webhdfs parameter "op"".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13324) Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails

2018-04-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433104#comment-16433104
 ] 

genericqa commented on HDFS-13324:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m  
0s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
11s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
27s{color} | {color:red} integration-test in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
23s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
24s{color} | {color:red} tools in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
10s{color} | {color:red} hadoop-hdds/common in HDFS-7240 has 2 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
26s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
25s{color} | {color:red} tools in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
28s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
27s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
30s{color} | {color:red} integration-test in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
28s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
29s{color} | {color:red} tools in HDFS-7240 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
12s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 38m 
21s{color} | 

[jira] [Commented] (HDFS-13416) TestNodeManager tests fail

2018-04-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433086#comment-16433086
 ] 

genericqa commented on HDFS-13416:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
17s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
54s{color} | {color:red} hadoop-hdds/common in HDFS-7240 has 2 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
15s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
10s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-hdds/common generated 0 new + 1 unchanged - 1 
fixed = 1 total (was 2) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
13s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
13s{color} | {color:red} server-scm in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 13s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
25s{color} | {color:red} The patch generated 69 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f |
| JIRA Issue | HDFS-13416 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918463/HDFS-13416-HDFS-7240.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 262b1def1f7c 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |

[jira] [Updated] (HDFS-12950) [oiv] ls will fail in secure cluster

2018-04-10 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12950:
---
Status: Patch Available  (was: Open)

> [oiv] ls will fail in secure cluster
> -
>
> Key: HDFS-12950
> URL: https://issues.apache.org/jira/browse/HDFS-12950
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-12950.001.patch
>
>
> If we execute ls, it will throw the following.
> {noformat}
> hdfs dfs -ls webhdfs://127.0.0.1:5978/
> ls: Invalid value for webhdfs parameter "op"
> {noformat}
> When the client is configured with security (i.e. 
> "hadoop.security.authentication=KERBEROS"), 
> webhdfs will request a delegation token (GETDELEGATIONTOKEN), which is not 
> implemented, and hence it will throw “ls: Invalid value for webhdfs parameter "op"”.






[jira] [Issue Comment Deleted] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-10 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Jeliazkov updated HDFS-13399:

Comment: was deleted

(was: Attaching a new patch: whitespace and checkstyle fixes mostly, plus 
optimizations to {{TestStateAlignmentContextWithHA}} to remove the checks for 
"waitActive()": since we are only transitioning states, there is no need to 
wait for block reports.

I made the assumption that waitActive() would wait for the node to enter Active 
state, but that is handled by {{transitionToActive}}.

No further changes.)

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch, 
> HDFS-13399-HDFS-12943.001.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.






[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-10 Thread Plamen Jeliazkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433061#comment-16433061
 ] 

Plamen Jeliazkov commented on HDFS-13399:
-

Attaching a new patch: whitespace and checkstyle fixes mostly, plus 
optimizations to {{TestStateAlignmentContextWithHA}} to remove the checks for 
"waitActive()": since we are only transitioning states, there is no need to 
wait for block reports.

I made the assumption that waitActive() would wait for the node to enter Active 
state, but that is handled by {{transitionToActive}}.

No further changes.

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch, 
> HDFS-13399-HDFS-12943.001.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.
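
A minimal sketch of the shape of that change (the field and constructor wiring 
here are illustrative, not taken from the posted patches):
{code:java}
public class DFSClient {
  // Per-instance context instead of the static Client-wide field:
  private final AlignmentContext alignmentContext;

  DFSClient(/* existing args, */ AlignmentContext alignmentContext) {
    this.alignmentContext = alignmentContext;
    // ... hand alignmentContext to the proxy factory so that each RPC
    // Call carries this client's state-id context, not a global one.
  }
}
{code}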






[jira] [Commented] (HDFS-13365) RBF: Adding trace support

2018-04-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433059#comment-16433059
 ] 

genericqa commented on HDFS-13365:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
16s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13365 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918459/HDFS-13365.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 727f93e6aa14 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8ab776d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23867/testReport/ |
| Max. process+thread count | 945 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23867/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Adding trace support
> -
>
> Key: HDFS-13365
> URL: https://issues.apache.org/jira/browse/HDFS-13365
> 

[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-10 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433055#comment-16433055
 ] 

Erik Krogen commented on HDFS-13399:


A few comments on the v001 patch:
* Looks like the Javadoc for 
{{NameNodeProxiesClient#createProxyWithClientProtocol}} has a link error.
* Test(Async)IPC failures look like they are probably relevant, can you 
investigate?
* {{TestStateAlignmentContextWithHA}} looks good, but I think it can be 
improved a bit.
** If the {{assertThat}} statement throws something, the thread will die, but 
the exception will not be bubbled up to the main thread. It is probably better 
instead to have an {{AtomicBoolean}} that is set to some value if a thread sees 
a failing condition, and to check for this in the main thread.
** I think you can do something cleaner than the current looping approach to 
wait for the threads to exit. Maybe use an {{ExecutorService}} with a fixed 
thread pool, {{invokeAll}}, then {{shutdown}}, followed by repeated calls to 
{{awaitTermination(1, TimeUnit.SECONDS)}}. With {{ExecutorService}} you can 
also use a {{Callable}} instead of {{Runnable}} and then have each thread 
return a special value if it encountered an unexpected condition (doing away 
with the {{AtomicBoolean}} mentioned above); see the sketch below.
** You have quite a few instances of a missing space between e.g. 
{{for}}/{{while}} and an opening parenthesis.
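
A minimal sketch of that {{ExecutorService}} pattern ({{NUM_CLIENTS}} and the 
worker body are illustrative, not from the patch):
{code:java}
ExecutorService pool = Executors.newFixedThreadPool(NUM_CLIENTS);
List<Callable<Boolean>> tasks = new ArrayList<>();
for (int i = 0; i < NUM_CLIENTS; i++) {
  // Return a value instead of asserting inside the worker, so a failure
  // is observed by the main thread rather than silently killing a thread.
  tasks.add(() -> doClientWorkAndVerifyStateIds());  // illustrative helper
}
List<Future<Boolean>> results = pool.invokeAll(tasks);
pool.shutdown();
while (!pool.awaitTermination(1, TimeUnit.SECONDS)) {
  // still waiting for the workers to exit
}
for (Future<Boolean> result : results) {
  assertTrue("a worker saw an unexpected condition", result.get());
}
{code}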



{quote}
Should we consider creating client and server side configuration for enabling / 
disabling AlignmentContext processing?
{quote}
Is there any advantage to turning it off? Is there any additional overhead? The 
only thing I can think of is increasing the RPC payload size slightly.

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch, 
> HDFS-13399-HDFS-12943.001.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.






[jira] [Commented] (HDFS-13045) RBF: Improve error message returned from subcluster

2018-04-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433036#comment-16433036
 ] 

genericqa commented on HDFS-13045:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
47s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13045 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918449/HDFS-13045.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 28380d9c9466 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8ab776d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23865/testReport/ |
| Max. process+thread count | 1349 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23865/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Improve error message returned from subcluster
> ---
>
> Key: HDFS-13045
> URL: 

[jira] [Commented] (HDFS-13197) Ozone: Fix ConfServlet#getOzoneTags cmd after HADOOP-15007

2018-04-10 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432992#comment-16432992
 ] 

Ajay Kumar commented on HDFS-13197:
---

Addressed some of the checkstyle warnings and one Findbugs warning in patch v2.

> Ozone: Fix ConfServlet#getOzoneTags cmd after HADOOP-15007
> --
>
> Key: HDFS-13197
> URL: https://issues.apache.org/jira/browse/HDFS-13197
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13197-HDFS-7240.000.patch, 
> HDFS-13197-HDFS-7240.001.patch, HDFS-13197-HDFS-7240.002.patch, Screen Shot 
> 2018-04-10 at 2.05.35 PM.png
>
>
> This is broken after merging the trunk change HADOOP-15007 into the HDFS-7240 
> branch. I removed the cmd and related test to have a clean merge. [~ajakumar], 
> please fix the cmd and bring back the related test.






[jira] [Updated] (HDFS-13197) Ozone: Fix ConfServlet#getOzoneTags cmd after HADOOP-15007

2018-04-10 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13197:
--
Attachment: Screen Shot 2018-04-10 at 2.05.35 PM.png

> Ozone: Fix ConfServlet#getOzoneTags cmd after HADOOP-15007
> --
>
> Key: HDFS-13197
> URL: https://issues.apache.org/jira/browse/HDFS-13197
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13197-HDFS-7240.000.patch, 
> HDFS-13197-HDFS-7240.001.patch, HDFS-13197-HDFS-7240.002.patch, Screen Shot 
> 2018-04-10 at 2.05.35 PM.png
>
>
> This is broken after merging the trunk change HADOOP-15007 into the HDFS-7240 
> branch. I removed the cmd and related test to have a clean merge. [~ajakumar], 
> please fix the cmd and bring back the related test.






[jira] [Updated] (HDFS-13197) Ozone: Fix ConfServlet#getOzoneTags cmd after HADOOP-15007

2018-04-10 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13197:
--
Attachment: HDFS-13197-HDFS-7240.002.patch

> Ozone: Fix ConfServlet#getOzoneTags cmd after HADOOP-15007
> --
>
> Key: HDFS-13197
> URL: https://issues.apache.org/jira/browse/HDFS-13197
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13197-HDFS-7240.000.patch, 
> HDFS-13197-HDFS-7240.001.patch, HDFS-13197-HDFS-7240.002.patch
>
>
> This is broken after merging the trunk change HADOOP-15007 into the HDFS-7240 
> branch. I removed the cmd and related test to have a clean merge. [~ajakumar], 
> please fix the cmd and bring back the related test.






[jira] [Commented] (HDFS-13416) TestNodeManager tests fail

2018-04-10 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432988#comment-16432988
 ] 

Bharat Viswanadham commented on HDFS-13416:
---

Thank you [~nandakumar131] for the review.

Uploaded patch v01 to address the review comments.

> TestNodeManager tests fail
> --
>
> Key: HDFS-13416
> URL: https://issues.apache.org/jira/browse/HDFS-13416
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13416-HDFS-7240.00.patch, 
> HDFS-13416-HDFS-7240.01.patch
>
>
> java.lang.IllegalArgumentException: Invalid UUID string: h0
> at java.util.UUID.fromString(UUID.java:194)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails.<init>(DatanodeDetails.java:68)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails.<init>(DatanodeDetails.java:36)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails$Builder.build(DatanodeDetails.java:416)
> at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:95)
> at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:48)
> at 
> org.apache.hadoop.hdds.scm.node.TestNodeManager.createNodeSet(TestNodeManager.java:719)
> at 
> org.apache.hadoop.hdds.scm.node.TestNodeManager.testScmLogsHeartbeatFlooding(TestNodeManager.java:913)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
>  
> This is happening after this change HDFS-13300 
> cc [~nandakumar131]
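
For reference, the quoted trace fails inside {{DatanodeDetails$Builder#build}} 
because "h0" is not a parseable UUID. A sketch of the kind of fix (builder 
method names assumed):
{code:java}
// Use a real UUID for the id and keep the host name separate.
DatanodeDetails dn = DatanodeDetails.newBuilder()
    .setUuid(UUID.randomUUID().toString())
    .setHostName("h0")
    .build();
{code}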






[jira] [Updated] (HDFS-13416) TestNodeManager tests fail

2018-04-10 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13416:
--
Attachment: HDFS-13416-HDFS-7240.01.patch

> TestNodeManager tests fail
> --
>
> Key: HDFS-13416
> URL: https://issues.apache.org/jira/browse/HDFS-13416
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13416-HDFS-7240.00.patch, 
> HDFS-13416-HDFS-7240.01.patch
>
>
> java.lang.IllegalArgumentException: Invalid UUID string: h0
> at java.util.UUID.fromString(UUID.java:194)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails.<init>(DatanodeDetails.java:68)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails.<init>(DatanodeDetails.java:36)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails$Builder.build(DatanodeDetails.java:416)
> at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:95)
> at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:48)
> at 
> org.apache.hadoop.hdds.scm.node.TestNodeManager.createNodeSet(TestNodeManager.java:719)
> at 
> org.apache.hadoop.hdds.scm.node.TestNodeManager.testScmLogsHeartbeatFlooding(TestNodeManager.java:913)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
>  
> This is happening after this change HDFS-13300 
> cc [~nandakumar131]






[jira] [Commented] (HDFS-12950) [oiv] ls will fail in secure cluster

2018-04-10 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432954#comment-16432954
 ] 

Wei-Chiu Chuang commented on HDFS-12950:


Thank you, Brahma.

I posted a patch to make WebImageViewer throw a RuntimeException when it 
detects that secure mode is enabled in the configuration. Also updated the doc 
and command-line help to indicate that this use case is not supported.
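
A minimal sketch of that kind of guard (the exact message and its placement in 
the patch may differ):
{code:java}
// Hypothetical startup check; UserGroupInformation is the standard way
// to detect whether Kerberos security is enabled.
if (UserGroupInformation.isSecurityEnabled()) {
  throw new RuntimeException(
      "WebImageViewer does not support secure (Kerberos) clusters.");
}
{code}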

> [oiv] ls will fail in secure cluster
> -
>
> Key: HDFS-12950
> URL: https://issues.apache.org/jira/browse/HDFS-12950
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-12950.001.patch
>
>
> If we execute ls, it will throw the following.
> {noformat}
> hdfs dfs -ls webhdfs://127.0.0.1:5978/
> ls: Invalid value for webhdfs parameter "op"
> {noformat}
> When the client is configured with security (i.e. 
> "hadoop.security.authentication=KERBEROS"), 
> webhdfs will request a delegation token (GETDELEGATIONTOKEN), which is not 
> implemented, and hence it will throw “ls: Invalid value for webhdfs parameter "op"”.






[jira] [Commented] (HDFS-13415) Ozone: Remove cblock code from HDFS-7240 (move to a different branch)

2018-04-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432955#comment-16432955
 ] 

genericqa commented on HDFS-13415:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-13415 does not apply to HDFS-7240. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13415 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918456/HDFS-13415-HDFS-7240.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23866/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Remove cblock code from HDFS-7240 (move to a different branch)
> -
>
> Key: HDFS-13415
> URL: https://issues.apache.org/jira/browse/HDFS-13415
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Attachments: HDFS-13415-HDFS-7240.001.patch, 
> HDFS-13415-HDFS-7240.002.patch
>
>
> The Ozone components (hdds/ozone) and the cblock components (cblock) have 
> different stability. We suggest separating the development: Ozone could 
> remain on HDFS-7240 and could be merged to trunk as voted by the 
> community.
> The cblock development could be kept on a separate feature branch 
> (HDFS-8) and could be developed and merged independently.
> To achieve this we:
>  1. need to remove the cblock code from the HDFS-7240 branch (this is what 
> this jira is about)
>  2. need to create a new branch from the latest HDFS-7240 which contains the 
> cblock server (not in this jira)






[jira] [Updated] (HDFS-12950) [oiv] ls will fail in secure cluster

2018-04-10 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12950:
---
Attachment: HDFS-12950.001.patch

> [oiv] ls will fail in secure cluster
> -
>
> Key: HDFS-12950
> URL: https://issues.apache.org/jira/browse/HDFS-12950
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-12950.001.patch
>
>
> If we execute ls, it will throw the following.
> {noformat}
> hdfs dfs -ls webhdfs://127.0.0.1:5978/
> ls: Invalid value for webhdfs parameter "op"
> {noformat}
> When the client is configured with security (i.e. 
> "hadoop.security.authentication=KERBEROS"), 
> webhdfs will request a delegation token (GETDELEGATIONTOKEN), which is not 
> implemented, and hence it will throw “ls: Invalid value for webhdfs parameter "op"”.






[jira] [Updated] (HDFS-13365) RBF: Adding trace support

2018-04-10 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13365:
---
Attachment: HDFS-13365.006.patch

> RBF: Adding trace support
> -
>
> Key: HDFS-13365
> URL: https://issues.apache.org/jira/browse/HDFS-13365
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13365.000.patch, HDFS-13365.001.patch, 
> HDFS-13365.003.patch, HDFS-13365.004.patch, HDFS-13365.005.patch, 
> HDFS-13365.006.patch
>
>
> We should support HTrace and add spans.
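
A minimal sketch of what adding a span looks like with HTrace 4 (the tracer 
name, config prefix, and scope name are assumptions, not from the patch):
{code:java}
// Hypothetical wiring for a Router-side tracer.
Tracer tracer = new Tracer.Builder("RouterRpcClient")
    .conf(TraceUtils.wrapHadoopConf("router.htrace.", conf))
    .build();
// Spans created on this thread while the scope is open become its children.
try (TraceScope scope = tracer.newScope("RouterRpcClient#invokeMethod")) {
  // ... invoke the remote method here ...
}
{code}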






[jira] [Commented] (HDFS-13415) Ozone: Remove cblock code from HDFS-7240 (move to a different branch)

2018-04-10 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432940#comment-16432940
 ] 

Elek, Marton commented on HDFS-13415:
-

Thank you [~msingh] for checking the patch.

It's hard to grep for cblock, as this expression is used in many places in the 
existing hdfs code (...cBlock).

I addressed all the issues in the latest patch (you can see the diff between 
the two patches here: 
https://github.com/elek/hadoop/commit/d77b2a6df77972275077c3f4650bdaf206b053df)

The only exception is TestConfiguration.java (item 3). Here the cblock is 
just a custom example string to test the configuration. The code itself isn't 
related to cblock. I would keep it as is, because changing it would introduce 
additional differences between trunk and HDFS-7240, which could make the merge 
review harder.

> Ozone: Remove cblock code from HDFS-7240 (move to a different branch)
> -
>
> Key: HDFS-13415
> URL: https://issues.apache.org/jira/browse/HDFS-13415
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Attachments: HDFS-13415-HDFS-7240.001.patch, 
> HDFS-13415-HDFS-7240.002.patch
>
>
> The Ozone components (hdds/ozone) and the cblock components (cblock) have 
> different stability. We suggest separating the development: Ozone could 
> remain on HDFS-7240 and could be merged to trunk as voted by the 
> community.
> The cblock development could be kept on a separate feature branch 
> (HDFS-8) and could be developed and merged independently.
> To achieve this we:
>  1. need to remove the cblock code from the HDFS-7240 branch (this is what 
> this jira is about)
>  2. need to create a new branch from the latest HDFS-7240 which contains the 
> cblock server (not in this jira)






[jira] [Assigned] (HDFS-12950) [oiv] ls will fail in secure cluster

2018-04-10 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-12950:
--

Assignee: Wei-Chiu Chuang  (was: Brahma Reddy Battula)

> [oiv] ls will fail in secure cluster
> -
>
> Key: HDFS-12950
> URL: https://issues.apache.org/jira/browse/HDFS-12950
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> If we execute ls, it will throw the following.
> {noformat}
> hdfs dfs -ls webhdfs://127.0.0.1:5978/
> ls: Invalid value for webhdfs parameter "op"
> {noformat}
> When the client is configured with security (i.e. 
> "hadoop.security.authentication=KERBEROS"), 
> webhdfs will request a delegation token (GETDELEGATIONTOKEN), which is not 
> implemented, and hence it will throw “ls: Invalid value for webhdfs parameter "op"”.






[jira] [Commented] (HDFS-13237) [Documentation] RBF: Mount points across multiple subclusters

2018-04-10 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432935#comment-16432935
 ] 

Íñigo Goiri commented on HDFS-13237:


[~maobaolong] thanks for the comments.
We introduced those methods in HDFS-13224 and HDFS-13250.

Regarding subclusters being down, I think the operations will fail.
We should add unit tests for those cases.

In [^HDFS-13237.003.patch], we already added a disclaimer stating that we don't 
provide consistency.
We could make the wording stronger if you feel strongly about it.

> [Documentation] RBF: Mount points across multiple subclusters
> -
>
> Key: HDFS-13237
> URL: https://issues.apache.org/jira/browse/HDFS-13237
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13237.000.patch, HDFS-13237.001.patch, 
> HDFS-13237.002.patch, HDFS-13237.003.patch
>
>
> Document the feature to spread mount points across multiple subclusters.






[jira] [Updated] (HDFS-13415) Ozone: Remove cblock code from HDFS-7240 (move to a different branch)

2018-04-10 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-13415:

Attachment: HDFS-13415-HDFS-7240.002.patch

> Ozone: Remove cblock code from HDFS-7240 (move to a different branch)
> -
>
> Key: HDFS-13415
> URL: https://issues.apache.org/jira/browse/HDFS-13415
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Attachments: HDFS-13415-HDFS-7240.001.patch, 
> HDFS-13415-HDFS-7240.002.patch
>
>
> The Ozone components (hdds/ozone) and the cblock components (cblock) have 
> different stability. We suggest separating the development: Ozone could 
> remain on HDFS-7240 and could be merged to trunk as voted by the 
> community.
> The cblock development could be kept on a separate feature branch 
> (HDFS-8) and could be developed and merged independently.
> To achieve this we:
>  1. need to remove the cblock code from the HDFS-7240 branch (this is what 
> this jira is about)
>  2. need to create a new branch from the latest HDFS-7240 which contains the 
> cblock server (not in this jira)






[jira] [Commented] (HDFS-13248) RBF: Namenode need to choose block location for the client

2018-04-10 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432927#comment-16432927
 ] 

Arpit Agarwal commented on HDFS-13248:
--

I'm not in favor of modifying the caller context, because the context string 
should be opaque to HDFS. Also, the logged caller context strings will be 
incorrect if the target NN is running an older version.

> RBF: Namenode need to choose block location for the client
> --
>
> Key: HDFS-13248
> URL: https://issues.apache.org/jira/browse/HDFS-13248
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, 
> clientMachine-call-path.jpeg, debug-info-1.jpeg, debug-info-2.jpeg
>
>
> When executing a put operation via the router, the NameNode will choose block 
> locations for the router, not for the real client. This will affect the file's 
> locality.
> I think that on both the NameNode and the Router we should add a new addBlock 
> method, or add a parameter to the current addBlock method, to pass the real 
> client information.
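
A sketch of the second option (the extra parameter is hypothetical, invented 
here for illustration; it is not from any posted patch):
{code:java}
// Hypothetical variant of ClientProtocol#addBlock that carries the real
// client's address, so the NameNode can choose block locations for it
// rather than for the Router.
LocatedBlock addBlock(String src, String clientName, ExtendedBlock previous,
    DatanodeInfo[] excludedNodes, long fileId, String[] favoredNodes,
    String clientMachine) throws IOException;
{code}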






[jira] [Updated] (HDFS-13325) Ozone: Rename ObjectStoreRestPlugin to OzoneDatanodeService

2018-04-10 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-13325:
---
      Resolution: Fixed
   Fix Version/s: HDFS-7240
Target Version/s: HDFS-7240
          Status: Resolved  (was: Patch Available)

Addressed as part of HDFS-13395.

> Ozone: Rename ObjectStoreRestPlugin to OzoneDatanodeService
> ---
>
> Key: HDFS-13325
> URL: https://issues.apache.org/jira/browse/HDFS-13325
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: newbie
> Fix For: HDFS-7240
>
> Attachments: HDFS-13325-HDFS-7240.000.patch
>
>
> Based on this comment, we can rename {{ObjectStoreRestPlugin}} to 
> {{OzoneDatanodeService}} so that the plugin name is consistent with 
> {{HdslDatanodeService}}. We can also remove {{DataNodeServicePlugin}} and 
> directly use {{ServicePlugin}}.






[jira] [Commented] (HDFS-13045) RBF: Improve error message returned from subcluster

2018-04-10 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432901#comment-16432901
 ] 

Íñigo Goiri commented on HDFS-13045:


I had to add  [^HDFS-13045.004.patch] for rebasing after HDFS-13410 and 
HDFS-13384.
Let's see if Yetus is clean again.

> RBF: Improve error message returned from subcluster
> ---
>
> Key: HDFS-13045
> URL: https://issues.apache.org/jira/browse/HDFS-13045
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13045.000.patch, HDFS-13045.001.patch, 
> HDFS-13045.002.patch, HDFS-13045.003.patch, HDFS-13045.004.patch
>
>
> Currently, the Router directly returns the exception response from the 
> subcluster to the client, which may not have the correct error message, 
> especially when the error message contains a path.
> One example: we have a mount path "/a/b" mapped to subclusterA's "/c/d". If 
> user1 does a chown operation on "/a/b" and doesn't have the corresponding 
> privilege, the error msg currently looks like "Permission denied. user=user1 
> is not the owner of inode=/c/d", which may confuse the user. It would be 
> better to map the path back to the original mount path.
>  
>  
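
A minimal sketch of that remapping idea (the method and argument names are 
illustrative):
{code:java}
// Hypothetical: rewrite the subcluster path in the error message back to
// the client-visible mount path before returning the exception.
private static IOException remapPath(IOException ioe, String remotePath,
    String mountPath) {
  String msg = ioe.getMessage();
  if (msg == null || !msg.contains(remotePath)) {
    return ioe;
  }
  return new IOException(msg.replace(remotePath, mountPath), ioe);
}
{code}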






[jira] [Updated] (HDFS-13045) RBF: Improve error message returned from subcluster

2018-04-10 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13045:
---
Attachment: HDFS-13045.004.patch

> RBF: Improve error message returned from subcluster
> ---
>
> Key: HDFS-13045
> URL: https://issues.apache.org/jira/browse/HDFS-13045
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13045.000.patch, HDFS-13045.001.patch, 
> HDFS-13045.002.patch, HDFS-13045.003.patch, HDFS-13045.004.patch
>
>
> Currently, the Router directly returns the exception response from the 
> subcluster to the client, which may not have the correct error message, 
> especially when the error message contains a path.
> One example: we have a mount path "/a/b" mapped to subclusterA's "/c/d". If 
> user1 does a chown operation on "/a/b" and doesn't have the corresponding 
> privilege, the error msg currently looks like "Permission denied. user=user1 
> is not the owner of inode=/c/d", which may confuse the user. It would be 
> better to map the path back to the original mount path.
>  
>  






[jira] [Commented] (HDFS-13324) Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails

2018-04-10 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432885#comment-16432885
 ] 

Shashikant Banerjee commented on HDFS-13324:


Rebased patch v2.

> Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails
> --
>
> Key: HDFS-13324
> URL: https://issues.apache.org/jira/browse/HDFS-13324
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13324-HDFS-7240.000.patch, 
> HDFS-13324-HDFS-7240.001.patch, HDFS-13324-HDFS-7240.002.patch, 
> HDFS-13324-HDFS-7240.003.patch
>
>
> We have removed the dependency on DatanodeID in HDSL/Ozone, so there is no 
> need for InfoPort and InfoSecurePort. It is now safe to remove InfoPort and 
> InfoSecurePort from DatanodeDetails.






[jira] [Updated] (HDFS-13324) Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails

2018-04-10 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-13324:
---
Attachment: HDFS-13324-HDFS-7240.003.patch

> Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails
> --
>
> Key: HDFS-13324
> URL: https://issues.apache.org/jira/browse/HDFS-13324
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13324-HDFS-7240.000.patch, 
> HDFS-13324-HDFS-7240.001.patch, HDFS-13324-HDFS-7240.002.patch, 
> HDFS-13324-HDFS-7240.003.patch
>
>
> We have removed the dependency on DatanodeID in HDSL/Ozone, so there is no 
> need for InfoPort and InfoSecurePort. It is now safe to remove InfoPort and 
> InfoSecurePort from DatanodeDetails.






[jira] [Commented] (HDFS-13330) ShortCircuitCache#fetchOrCreate never retries

2018-04-10 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432860#comment-16432860
 ] 

Wei-Chiu Chuang commented on HDFS-13330:


+1. Test failures are unrelated.

> ShortCircuitCache#fetchOrCreate never retries
> -
>
> Key: HDFS-13330
> URL: https://issues.apache.org/jira/browse/HDFS-13330
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HDFS-13330.001.patch, HDFS-13330.002.patch, 
> HDFS-13330.004.patch, HDFS-13330.005.patch
>
>
> The following do..while(false) loop seems useless to me. The code intended to 
> retry, but it never worked. Let's fix it.
> {code:java:title=ShortCircuitCache#fetchOrCreate}
> ShortCircuitReplicaInfo info = null;
> do {
>   if (closed) {
> LOG.trace("{}: can't fethchOrCreate {} because the cache is closed.",
> this, key);
> return null;
>   }
>   Waitable<ShortCircuitReplicaInfo> waitable = replicaInfoMap.get(key);
>   if (waitable != null) {
> try {
>   info = fetch(key, waitable);
> } catch (RetriableException e) {
>   LOG.debug("{}: retrying {}", this, e.getMessage());
> }
>   }
> } while (false);{code}
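
For illustration, one shape the loop could take to actually retry (a sketch 
only, not necessarily what the committed patch does):
{code:java}
ShortCircuitReplicaInfo info = null;
while (true) {
  if (closed) {
    LOG.trace("{}: can't fetchOrCreate {} because the cache is closed.",
        this, key);
    return null;
  }
  Waitable<ShortCircuitReplicaInfo> waitable = replicaInfoMap.get(key);
  if (waitable != null) {
    try {
      info = fetch(key, waitable);
    } catch (RetriableException e) {
      LOG.debug("{}: retrying {}", this, e.getMessage());
      continue;  // actually loop back and retry the fetch
    }
  }
  break;  // fetched successfully, or there was no entry to wait on
}
{code}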






[jira] [Updated] (HDFS-13413) ClusterId and DatanodeUuid should be marked mandatory fields in SCMRegisteredCmdResponseProto

2018-04-10 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-13413:
---
Description: 
ClusterId as well as the DatanodeUuid are currently optional fields in 
{{SCMRegisteredCmdResponseProto}}. We have to make both clusterId and 
DatanodeUuid required fields and handle them properly. As of now, we don't do 
anything with the response of the datanode registration. We should validate 
both the clusterId and the datanodeUuid.
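
For illustration, a hedged sketch of the kind of validation implied here; the proto accessor names and the helper are assumptions, not the actual patch:
{code:java}
// Hypothetical datanode-side check of the registration response.
static void validateRegistrationResponse(
    SCMRegisteredCmdResponseProto response, String expectedClusterId)
    throws IOException {
  if (!response.hasClusterID() || !response.hasDatanodeUUID()) {
    throw new IOException(
        "Registration response is missing clusterId or datanodeUuid");
  }
  if (!expectedClusterId.equals(response.getClusterID())) {
    throw new IOException(
        "Unexpected clusterId in registration response: "
            + response.getClusterID());
  }
}
{code}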

> ClusterId and DatanodeUuid should be marked mandatory fields in 
> SCMRegisteredCmdResponseProto
> -
>
> Key: HDFS-13413
> URL: https://issues.apache.org/jira/browse/HDFS-13413
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ozone
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
> Fix For: HDFS-7240
>
>
> ClusterId as well as the DatanodeUuid are currently optional fields in 
> {{SCMRegisteredCmdResponseProto}}. We have to make both clusterId and 
> DatanodeUuid required fields and handle them properly. As of now, we don't do 
> anything with the response of the datanode registration. We should validate 
> both the clusterId and the datanodeUuid.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13413) ClusterId and DatanodeUuid should be marked mandatory fields in SCMRegisteredCmdResponseProto

2018-04-10 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-13413:
---
Environment: (was: ClusterId as well as the DatanodeUuid are optional 
fields in SCMRegisteredCmdResponseProto

currently. We have to make both clusterId and DatanodeUuid as required field 
and handle it properly. As of now, we don't do anything with the response of 
datanode registration. We should validate the clusterId and also the 
datanodeUuid)

> ClusterId and DatanodeUuid should be marked mandatory fields in 
> SCMRegisteredCmdResponseProto
> -
>
> Key: HDFS-13413
> URL: https://issues.apache.org/jira/browse/HDFS-13413
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ozone
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
> Fix For: HDFS-7240
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13416) TestNodeManager tests fail

2018-04-10 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432845#comment-16432845
 ] 

Nanda kumar commented on HDFS-13416:


Thanks [~bharatviswa] for reporting and working on this. Overall the patch 
looks good to me; some minor comments:

*DatanodeDetails.java*
 {{equals}} method implementation can be simplified to something like
{code:java}
  public boolean equals(Object obj) {
    return obj instanceof DatanodeDetails &&
        uuid.equals(((DatanodeDetails) obj).uuid);
  }
{code}
and {{hashCode}} to
{code:java}
  public int hashCode() {
    return uuid.hashCode();
  }
{code}
*SCMNodeManager.java*
 {{sendHeartbeat}}
Since we have a return statement inside the {{if}} construct, we don't need the 
else block. Or the whole thing can be done using 
{{Preconditions.checkNotNull(datanodeDetailsProto, "Heartbeat is missing 
DatanodeDetails.")}}.

> TestNodeManager tests fail
> --
>
> Key: HDFS-13416
> URL: https://issues.apache.org/jira/browse/HDFS-13416
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13416-HDFS-7240.00.patch
>
>
> java.lang.IllegalArgumentException: Invalid UUID string: h0
> at java.util.UUID.fromString(UUID.java:194)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails.<init>(DatanodeDetails.java:68)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails.<init>(DatanodeDetails.java:36)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails$Builder.build(DatanodeDetails.java:416)
> at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:95)
> at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:48)
> at 
> org.apache.hadoop.hdds.scm.node.TestNodeManager.createNodeSet(TestNodeManager.java:719)
> at 
> org.apache.hadoop.hdds.scm.node.TestNodeManager.testScmLogsHeartbeatFlooding(TestNodeManager.java:913)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
>  
> This is happening after the change in HDFS-13300. 
> cc [~nandakumar131]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13423) Ozone: Clean-up of ozone related change from hadoop-hdfs-project

2018-04-10 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432787#comment-16432787
 ] 

Nanda kumar commented on HDFS-13423:


{{MiniOzoneClassicCluster}} related changes are handled in HDFS-13424.

> Ozone: Clean-up of ozone related change from hadoop-hdfs-project
> 
>
> Key: HDFS-13423
> URL: https://issues.apache.org/jira/browse/HDFS-13423
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-13423-HDFS-7240.000.patch
>
>
> This jira is for tracking the clean-up and reverting of the ozone-related 
> changes made in hadoop-hdfs-project.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13423) Ozone: Clean-up of ozone related change from hadoop-hdfs-project

2018-04-10 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-13423:
---
Status: Patch Available  (was: Open)

> Ozone: Clean-up of ozone related change from hadoop-hdfs-project
> 
>
> Key: HDFS-13423
> URL: https://issues.apache.org/jira/browse/HDFS-13423
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-13423-HDFS-7240.000.patch
>
>
> This jira is for tracking the clean-up and reverting of the ozone-related 
> changes made in hadoop-hdfs-project.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13423) Ozone: Clean-up of ozone related change from hadoop-hdfs-project

2018-04-10 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-13423:
---
Attachment: HDFS-13423-HDFS-7240.000.patch

> Ozone: Clean-up of ozone related change from hadoop-hdfs-project
> 
>
> Key: HDFS-13423
> URL: https://issues.apache.org/jira/browse/HDFS-13423
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-13423-HDFS-7240.000.patch
>
>
> This jira is for tracking the clean-up and reverting of the ozone-related 
> changes made in hadoop-hdfs-project.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13045) RBF: Improve error message returned from subcluster

2018-04-10 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432781#comment-16432781
 ] 

Wei Yan commented on HDFS-13045:


Sure, +1 on [^HDFS-13045.003.patch].

> RBF: Improve error message returned from subcluster
> ---
>
> Key: HDFS-13045
> URL: https://issues.apache.org/jira/browse/HDFS-13045
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13045.000.patch, HDFS-13045.001.patch, 
> HDFS-13045.002.patch, HDFS-13045.003.patch
>
>
> Currently, the Router directly returns the exception response from the 
> subcluster to the client, which may not have the correct error message, 
> especially when the error message contains a path.
> One example: we have a mount path "/a/b" mapped to subclusterA's "/c/d". If 
> user1 does a chown operation on "/a/b" and doesn't have the corresponding 
> privilege, currently the error msg looks like "Permission denied. user=user1 
> is not the owner of inode=/c/d", which may confuse the user. It would be better 
> to map the path back to the original mount path.
>  
>  
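
For illustration, a minimal sketch of the remapping idea, using the /a/b -> /c/d example from the description (purely illustrative, not the actual patch):
{code:java}
// Rewrite a subcluster path in an error message back to the mount-table view.
static String remapError(String msg, String mountPath, String remotePath) {
  return msg == null ? null : msg.replace(remotePath, mountPath);
}

// e.g. remapError("Permission denied. user=user1 is not the owner of inode=/c/d",
//     "/a/b", "/c/d") yields "... is not the owner of inode=/a/b".
{code}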



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13045) RBF: Improve error message returned from subcluster

2018-04-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432766#comment-16432766
 ] 

Íñigo Goiri commented on HDFS-13045:


OK, we can go with this if so.
Is [^HDFS-13045.003.patch] good to go?

> RBF: Improve error message returned from subcluster
> ---
>
> Key: HDFS-13045
> URL: https://issues.apache.org/jira/browse/HDFS-13045
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13045.000.patch, HDFS-13045.001.patch, 
> HDFS-13045.002.patch, HDFS-13045.003.patch
>
>
> Currently, the Router directly returns the exception response from the 
> subcluster to the client, which may not have the correct error message, 
> especially when the error message contains a path.
> One example: we have a mount path "/a/b" mapped to subclusterA's "/c/d". If 
> user1 does a chown operation on "/a/b" and doesn't have the corresponding 
> privilege, currently the error msg looks like "Permission denied. user=user1 
> is not the owner of inode=/c/d", which may confuse the user. It would be better 
> to map the path back to the original mount path.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13363) Record file path when FSDirAclOp throws AclException

2018-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432760#comment-16432760
 ] 

Hudson commented on HDFS-13363:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13957 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13957/])
HDFS-13363. Record file path when FSDirAclOp throws AclException. (xiao: rev 
e76c2aeb288710ebee39680528dec44e454bbe9e)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/AclException.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java


> Record file path when FSDirAclOp throws AclException
> 
>
> Key: HDFS-13363
> URL: https://issues.apache.org/jira/browse/HDFS-13363
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HDFS-13363.001.patch, HDFS-13363.002.patch, 
> HDFS-13363.003.patch
>
>
> When AclTransformation methods throw an AclException, they do not record the 
> file path that has the exception. Therefore, even if an exception is thrown, 
> we would never know which file has those invalid ACLs.
>  
> These AclTransformation methods are invoked in FSDirAclOp methods, which know 
> the file path. These FSDirAclOp methods can catch the AclException and then add 
> the file path to the error message.
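
For illustration, a minimal sketch of the wrap-and-rethrow pattern described above, assuming an {{AclException}} constructor that also accepts a cause ('src' and the variable names are assumptions; the committed patch may differ):
{code:java}
// Hedged sketch of an FSDirAclOp-style wrapper that adds the file path.
try {
  newAcl = AclTransformation.mergeAclEntries(existingAcl, aclSpec);
} catch (AclException e) {
  throw new AclException("Path " + src + ": " + e.getMessage(), e);
}
{code}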



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13328) Abstract ReencryptionHandler recursive logic in separate class.

2018-04-10 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432745#comment-16432745
 ] 

Rakesh R commented on HDFS-13328:
-

FYI, attached [^HDFS-13328-05.patch], where I made a very small change: just 
removed one newline in the {{ReencryptionHandler.java}} class in order to make 
the cherry-pick smooth.

> Abstract ReencryptionHandler recursive logic in separate class.
> ---
>
> Key: HDFS-13328
> URL: https://issues.apache.org/jira/browse/HDFS-13328
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HDFS-13328-01.patch, HDFS-13328-02.patch, 
> HDFS-13328-03.patch, HDFS-13328-04.patch, HDFS-13328-05.patch
>
>
> HDFS-10899 added DFS logic to scan a directory. It would be good to abstract 
> this logic into a separate class so it can be used by other features like 
> SPS (HDFS-10285). I already tried abstracting the DFS logic in HDFS-12291, and 
> the same can be pushed to trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13395) Ozone: Plugins support in HDSL Datanode Service

2018-04-10 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-13395:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

Thanks [~nandakumar131] for the contribution, and all for the reviews. I've 
committed the patch to the feature branch. 

> Ozone: Plugins support in HDSL Datanode Service
> ---
>
> Key: HDFS-13395
> URL: https://issues.apache.org/jira/browse/HDFS-13395
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13395-HDFS-7240.000.patch, 
> HDFS-13395-HDFS-7240.001.patch, HDFS-13395-HDFS-7240.002.patch
>
>
> As part of the Datanode, we start {{HdslDatanodeService}} if {{ozone}} is 
> enabled. We need a provision to load plugins like {{Ozone Rest Service}} as 
> part of {{HdslDatanodeService}} startup.
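
For illustration, a hedged sketch of such plugin loading, modeled on how the HDFS DataNode loads its {{ServicePlugin}}s (the config key here is an assumption):
{code:java}
// Hypothetical: instantiate and start configured plugins on service start.
List<ServicePlugin> plugins =
    conf.getInstances("hdsl.datanode.plugins", ServicePlugin.class);
for (ServicePlugin plugin : plugins) {
  plugin.start(this); // hand the running service to each plugin
}
{code}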



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13328) Abstract ReencryptionHandler recursive logic in separate class.

2018-04-10 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-13328:

Attachment: HDFS-13328-05.patch

> Abstract ReencryptionHandler recursive logic in separate class.
> ---
>
> Key: HDFS-13328
> URL: https://issues.apache.org/jira/browse/HDFS-13328
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HDFS-13328-01.patch, HDFS-13328-02.patch, 
> HDFS-13328-03.patch, HDFS-13328-04.patch, HDFS-13328-05.patch
>
>
> HDFS-10899 added DFS logic to scan a directory. It would be good to abstract 
> this logic into a separate class so it can be used by other features like 
> SPS (HDFS-10285). I already tried abstracting the DFS logic in HDFS-12291, and 
> the same can be pushed to trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13328) Abstract ReencryptionHandler recursive logic in separate class.

2018-04-10 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-13328:

   Resolution: Fixed
Fix Version/s: 3.0.3
   3.1.1
   3.2.0
   Status: Resolved  (was: Patch Available)

Committed to {{trunk}}, cherry-picked to {{branch-3.1}} and {{branch-3.0}}.

> Abstract ReencryptionHandler recursive logic in separate class.
> ---
>
> Key: HDFS-13328
> URL: https://issues.apache.org/jira/browse/HDFS-13328
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HDFS-13328-01.patch, HDFS-13328-02.patch, 
> HDFS-13328-03.patch, HDFS-13328-04.patch
>
>
> HDFS-10899 added DFS logic to scan a directory. It would be good to abstract 
> this logic into a separate class so it can be used by other features like 
> SPS (HDFS-10285). I already tried abstracting the DFS logic in HDFS-12291, and 
> the same can be pushed to trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13363) Record file path when FSDirAclOp throws AclException

2018-04-10 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13363:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed to trunk!

Thanks Wei-Chiu for creating the jira, Daryn for commenting and Gabor for the 
patch!

> Record file path when FSDirAclOp throws AclException
> 
>
> Key: HDFS-13363
> URL: https://issues.apache.org/jira/browse/HDFS-13363
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HDFS-13363.001.patch, HDFS-13363.002.patch, 
> HDFS-13363.003.patch
>
>
> When AclTransformation methods throw an AclException, they do not record the 
> file path that has the exception. Therefore, even if an exception is thrown, 
> we would never know which file has those invalid ACLs.
>  
> These AclTransformation methods are invoked in FSDirAclOp methods, which know 
> the file path. These FSDirAclOp methods can catch the AclException and then add 
> the file path to the error message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13395) Ozone: Plugins support in HDSL Datanode Service

2018-04-10 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432731#comment-16432731
 ] 

Xiaoyu Yao commented on HDFS-13395:
---

+1 for the v2 patch. I will commit it shortly. 

> Ozone: Plugins support in HDSL Datanode Service
> ---
>
> Key: HDFS-13395
> URL: https://issues.apache.org/jira/browse/HDFS-13395
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-13395-HDFS-7240.000.patch, 
> HDFS-13395-HDFS-7240.001.patch, HDFS-13395-HDFS-7240.002.patch
>
>
> As part of the Datanode, we start {{HdslDatanodeService}} if {{ozone}} is 
> enabled. We need a provision to load plugins like {{Ozone Rest Service}} as 
> part of {{HdslDatanodeService}} startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13328) Abstract ReencryptionHandler recursive logic in separate class.

2018-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432730#comment-16432730
 ] 

Hudson commented on HDFS-13328:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13956 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13956/])
HDFS-13328. Abstract ReencryptionHandler recursive logic in separate (rakeshr: 
rev f89594f0b80e8efffdcb887daa4a18a2b0a228b3)
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSTreeTraverser.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestReencryption.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionUpdater.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestReencryptionHandler.java


> Abstract ReencryptionHandler recursive logic in separate class.
> ---
>
> Key: HDFS-13328
> URL: https://issues.apache.org/jira/browse/HDFS-13328
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-13328-01.patch, HDFS-13328-02.patch, 
> HDFS-13328-03.patch, HDFS-13328-04.patch
>
>
> HDFS-10899 added DFS logic to scan a directory. It would be good to abstract 
> this logic into a separate class so it can be used by other features like 
> SPS (HDFS-10285). I already tried abstracting the DFS logic in HDFS-12291, and 
> the same can be pushed to trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13363) Record file path when FSDirAclOp throws AclException

2018-04-10 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432713#comment-16432713
 ] 

Xiao Chen commented on HDFS-13363:
--

Failed tests look unrelated. Committing this.

> Record file path when FSDirAclOp throws AclException
> 
>
> Key: HDFS-13363
> URL: https://issues.apache.org/jira/browse/HDFS-13363
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HDFS-13363.001.patch, HDFS-13363.002.patch, 
> HDFS-13363.003.patch
>
>
> When AclTransformation methods throw an AclException, they do not record the 
> file path that has the exception. Therefore, even if an exception is thrown, 
> we would never know which file has those invalid ACLs.
>  
> These AclTransformation methods are invoked in FSDirAclOp methods, which know 
> the file path. These FSDirAclOp methods can catch the AclException and then add 
> the file path to the error message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13165) [SPS]: Collects successfully moved block details via IBR

2018-04-10 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432698#comment-16432698
 ] 

Surendra Singh Lilhore commented on HDFS-13165:
---

Thanks [~rakeshr] for the patch and thanks [~umamaheswararao] for the review.

One minor comment: can we change the DNA_BLOCK_STORAGE_MOVEMENT command name to 
DNA_MOVEBLOCK?

Otherwise LGTM.

> [SPS]: Collects successfully moved block details via IBR
> 
>
> Key: HDFS-13165
> URL: https://issues.apache.org/jira/browse/HDFS-13165
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Major
> Attachments: HDFS-13165-HDFS-10285-00.patch, 
> HDFS-13165-HDFS-10285-01.patch, HDFS-13165-HDFS-10285-02.patch, 
> HDFS-13165-HDFS-10285-03.patch, HDFS-13165-HDFS-10285-04.patch, 
> HDFS-13165-HDFS-10285-05.patch, HDFS-13165-HDFS-10285-06.patch, 
> HDFS-13165-HDFS-10285-07.patch, HDFS-13166-HDFS-10285-07.patch
>
>
> This task is to make use of the existing IBR to get moved-block details and 
> remove the unwanted future-tracking logic that exists in the 
> BlockStorageMovementTracker code; this is no longer needed, as the file-level 
> tracking is maintained at the NN itself.
> Following comments taken from HDFS-10285, 
> [here|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16347472=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16347472]
> Comment-3)
> {quote}BPServiceActor
> Is it actually sending back the moved blocks? Aren’t IBRs sufficient?{quote}
> Comment-21)
> {quote}
> BlockStorageMovementTracker
> Many data structures are riddled with non-threadsafe race conditions and risk 
> of CMEs.
> Ex. The moverTaskFutures map. Adding new blocks and/or adding to a block's 
> list of futures is synchronized. However the run loop does an unsynchronized 
> block get, unsynchronized future remove, unsynchronized isEmpty, possibly 
> another unsynchronized get, only then does it do a synchronized remove of the 
> block. The whole chunk of code should be synchronized.
> Is the problematic moverTaskFutures even needed? It's aggregating futures 
> per-block for seemingly no reason. Why track all the futures at all instead 
> of just relying on the completion service? As best I can tell:
> It's only used to determine if a future from the completion service should be 
> ignored during shutdown. Shutdown sets the running boolean to false and 
> clears the entire datastructure so why not use the running boolean like a 
> check just a little further down?
> As synchronization to sleep up to 2 seconds before performing a blocking 
> moverCompletionService.take, but only when it thinks there are no active 
> futures. I'll ignore the missed notify race that the bounded wait masks, but 
> the real question is why not just do the blocking take?
> Why all the complexity? Am I missing something?
> BlocksMovementsStatusHandler
> Suffers same type of thread safety issues as StoragePolicySatisfyWorker. Ex. 
> blockIdVsMovementStatus is inconsistent synchronized. Does synchronize to 
> return an unmodifiable list which sadly does nothing to protect the caller 
> from CME.
> handle is iterating over a non-thread safe list.
> {quote}
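
For illustration, a minimal sketch of the simpler completion-service loop the quoted review is pointing at, assuming a {{running}} flag, a {{moverCompletionService}}, and a result type named {{BlockMovementResult}} (all names are assumptions, not the actual tracker code):
{code:java}
try {
  while (running) {
    // Blocking take: no per-block future map or bounded sleep needed.
    Future<BlockMovementResult> future = moverCompletionService.take();
    if (!running) {
      break; // shutdown requested; drop the pending result
    }
    handler.handle(future.get());
  }
} catch (InterruptedException e) {
  Thread.currentThread().interrupt();
} catch (ExecutionException e) {
  LOG.error("Block movement task failed", e);
}
{code}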



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13197) Ozone: Fix ConfServlet#getOzoneTags cmd after HADOOP-15007

2018-04-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432642#comment-16432642
 ] 

genericqa commented on HDFS-13197:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
19s{color} | {color:red} framework in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
57s{color} | {color:red} hadoop-hdds/common in HDFS-7240 has 2 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
17s{color} | {color:red} framework in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} framework in HDFS-7240 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
14s{color} | {color:red} framework in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} hadoop-hdds: The patch generated 22 new + 0 
unchanged - 0 fixed = 22 total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
13s{color} | {color:red} framework in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
10s{color} | {color:red} hadoop-hdds/common generated 1 new + 2 unchanged - 0 
fixed = 3 total (was 2) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
16s{color} | {color:red} framework in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
15s{color} | {color:red} framework in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
41s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 18s{color} 
| {color:red} framework in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
35s{color} | {color:red} The patch generated 69 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/common |
|  |  Class org.apache.hadoop.hdds.conf.HddsConfServlet defines non-transient 

[jira] [Commented] (HDFS-12861) Track speed in DFSClient

2018-04-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432623#comment-16432623
 ] 

Íñigo Goiri commented on HDFS-12861:


Thanks [~mf_borge].
For the patches, you should use naming like HDFS-12861.000.patch and not 
remove the old ones, so we can keep track of the progress.

Other minor comments before I go deeper:
* Do you mind fixing the checkstyle issues in [^HDFS-12861-10-april-18.patch]?
* Do we need to make {{initThreadsNumForHedgedReads()}} non-static?
* startReadTime and endReadTime could be initialized where they get used.
* In DataStreamer we could just import 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.
* Can we leave packetProcessingTimes in nanos, as the other metric for writes 
already uses nanos?
* Can we add some unit tests?

> Track speed in DFSClient
> 
>
> Key: HDFS-12861
> URL: https://issues.apache.org/jira/browse/HDFS-12861
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: María Fernanda Borge
>Priority: Major
> Attachments: HDFS-12861-10-april-18.patch
>
>
> Sometimes we get slow jobs because of the access to HDFS. However, it is hard 
> to tell what the actual speed is. We propose to add a log line with something 
> like:
> {code}
> 2017-11-19 09:55:26,309 INFO [main] hdfs.DFSClient: blk_1107222019_38144502 
> READ 129500B in 7ms 17.6MB/s
> 2017-11-27 19:01:04,141 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792057_86833357 WRITE 131072B in 10ms 12.5MB/s
> 2017-11-27 19:01:14,219 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792069_86833369 WRITE 131072B in 12ms 10.4MB/s
> 2017-11-27 19:01:24,282 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792081_86833381 WRITE 131072B in 11ms 11.4MB/s
> 2017-11-27 19:01:34,330 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792093_86833393 WRITE 131072B in 11ms 11.4MB/s
> 2017-11-27 19:01:44,408 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792105_86833405 WRITE 131072B in 11ms 11.4MB/s
> {code}
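
For illustration, the arithmetic behind such a line, as a purely illustrative helper:
{code:java}
// Format a read-speed line like the samples above: bytes and elapsed millis
// in, MB/s out. 129500B in 7ms gives 17.6MB/s, matching the first sample.
static String speedLine(String blockName, long bytes, long millis) {
  double mbPerSec = (bytes / 1048576.0) / (Math.max(millis, 1) / 1000.0);
  return String.format("%s READ %dB in %dms %.1fMB/s",
      blockName, bytes, millis, mbPerSec);
}
{code}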



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13243) Get CorruptBlock because of calling close and sync in same time

2018-04-10 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432621#comment-16432621
 ] 

Wei-Chiu Chuang edited comment on HDFS-13243 at 4/10/18 5:27 PM:
-

Hi Zephyr, thanks for updating the patch.

Unfortunately, it looks like TestHFlush.hSyncUpdateLength_00 failed, and it is 
reproducible on my local machine.
{noformat}
java.lang.NullPointerException at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.fsync(FSNamesystem.java:3279)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.fsync(NameNodeRpcServer.java:1414)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.fsync(ClientNamenodeProtocolServerSideTranslatorPB.java:1026)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
{noformat}
Also, I am still not sure about the client-side fix. It basically extends the 
synchronized block to make sure it holds the lock while waiting for the ack and 
calling fsync at the NameNode. It is more intuitive to me not to hold the lock 
while waiting for RPCs to return.

That aside, please refrain from using wildcard imports (DFSOutputStream.java).


was (Author: jojochuang):
Hi Zephyr, thanks for updating the patch.

Unfortunately, it looks like TestHFlush.hSyncUpdateLength_00 failed, and it is 
reproducible on my local machine.
{noformat}
java.lang.NullPointerException at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.fsync(FSNamesystem.java:3279)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.fsync(NameNodeRpcServer.java:1414)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.fsync(ClientNamenodeProtocolServerSideTranslatorPB.java:1026)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
{noformat}
Also, I am still not sure about the client-side fix. It basically extends the 
synchronized block to make sure it holds the lock while waiting for the ack and 
calling fsync at the NameNode.

That aside, please refrain from using wildcard imports (DFSOutputStream.java).

> Get CorruptBlock because of calling close and sync in same time
> ---
>
> Key: HDFS-13243
> URL: https://issues.apache.org/jira/browse/HDFS-13243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>Priority: Critical
> Attachments: HDFS-13243-v1.patch, HDFS-13243-v2.patch, 
> HDFS-13243-v3.patch, HDFS-13243-v4.patch, HDFS-13243-v5.patch
>
>
> An HDFS file might get broken because of corrupt block(s) that could be 
> produced by calling close and sync at the same time.
> When the close call was not successful, the UCBlock status would change to 
> COMMITTED, and if a sync request gets popped from the queue and processed, the 
> sync operation would change the last block length.
> After that, the DataNode would report all received blocks to the NameNode, 
> which checks the block length of all COMMITTED blocks. But the block length 
> already differs between what is recorded in NameNode memory and what is 
> reported by the DataNode; consequently, the last block is marked as corrupted 
> because of the inconsistent length.
>  
> {panel:title=Log in my hdfs}
> 2018-03-05 04:05:39,261 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocate blk_1085498930_11758129\{UCState=UNDER_CONSTRUCTION, 
> truncateBlock=null, primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> 

[jira] [Commented] (HDFS-13243) Get CorruptBlock because of calling close and sync in same time

2018-04-10 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432621#comment-16432621
 ] 

Wei-Chiu Chuang commented on HDFS-13243:


Hi Zephyr, thanks for updating the patch.

Unfortunately, it looks like TestHFlush.hSyncUpdateLength_00 failed, and it is 
reproducible on my local machine.
{noformat}
java.lang.NullPointerException at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.fsync(FSNamesystem.java:3279)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.fsync(NameNodeRpcServer.java:1414)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.fsync(ClientNamenodeProtocolServerSideTranslatorPB.java:1026)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
{noformat}
Also, I am still not sure about the client-side fix. It basically extends the 
synchronized block to make sure it holds the lock while waiting for the ack and 
calling fsync at the NameNode.

That aside, please refrain from using wildcard imports (DFSOutputStream.java).

> Get CorruptBlock because of calling close and sync in same time
> ---
>
> Key: HDFS-13243
> URL: https://issues.apache.org/jira/browse/HDFS-13243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>Priority: Critical
> Attachments: HDFS-13243-v1.patch, HDFS-13243-v2.patch, 
> HDFS-13243-v3.patch, HDFS-13243-v4.patch, HDFS-13243-v5.patch
>
>
> An HDFS file might get broken because of corrupt block(s) that could be 
> produced by calling close and sync at the same time.
> When the close call was not successful, the UCBlock status would change to 
> COMMITTED, and if a sync request gets popped from the queue and processed, the 
> sync operation would change the last block length.
> After that, the DataNode would report all received blocks to the NameNode, 
> which checks the block length of all COMMITTED blocks. But the block length 
> already differs between what is recorded in NameNode memory and what is 
> reported by the DataNode; consequently, the last block is marked as corrupted 
> because of the inconsistent length.
>  
> {panel:title=Log in my hdfs}
> 2018-03-05 04:05:39,261 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocate blk_1085498930_11758129\{UCState=UNDER_CONSTRUCTION, 
> truncateBlock=null, primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  for 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,760 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> fsync: 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
>  for DFSClient_NONMAPREDUCE_1077513762_1
> 2018-03-05 04:05:39,761 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 2) in 
> file 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.0.0.220:50010 is added to 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> 

[jira] [Created] (HDFS-13425) Ozone: Clean-up of ozone related change from hadoop-common-project

2018-04-10 Thread Nanda kumar (JIRA)
Nanda kumar created HDFS-13425:
--

 Summary: Ozone: Clean-up of ozone related change from 
hadoop-common-project
 Key: HDFS-13425
 URL: https://issues.apache.org/jira/browse/HDFS-13425
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Nanda kumar


This jira is for tracking the clean-up and reverting of the ozone-related 
changes made in hadoop-common-project.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13424) Ozone: Refactor MiniOzoneClassicCluster

2018-04-10 Thread Nanda kumar (JIRA)
Nanda kumar created HDFS-13424:
--

 Summary: Ozone: Refactor MiniOzoneClassicCluster
 Key: HDFS-13424
 URL: https://issues.apache.org/jira/browse/HDFS-13424
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Nanda kumar
Assignee: Nanda kumar


This jira will track the refactoring work on {{MiniOzoneClassicCluster}}, which 
removes the dependency on, and the changes made in, {{MiniDFSCluster}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13423) Ozone: Clean-up of ozone related change from hadoop-hdfs-project

2018-04-10 Thread Nanda kumar (JIRA)
Nanda kumar created HDFS-13423:
--

 Summary: Ozone: Clean-up of ozone related change from 
hadoop-hdfs-project
 Key: HDFS-13423
 URL: https://issues.apache.org/jira/browse/HDFS-13423
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Nanda kumar
Assignee: Nanda kumar


This jira is for tracking the clean-up and reverting of the ozone-related 
changes made in hadoop-hdfs-project.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432594#comment-16432594
 ] 

Íñigo Goiri commented on HDFS-13388:


Thanks [~rakeshr] for reporting.
Reverted from branch-3.0 too.
Let me know if there is anywhere else.

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, 
> HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, 
> HADOOP-13388.0006.patch
>
>
> In HDFS-7858, RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every time, 
> even when we have already got the successful NN. 
> That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class 
> handles invoked methods by calling multiple configured NNs.
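
For illustration, a hedged sketch of the intended behavior, reusing the {{currentUsedProxy}} field named in the description (the hedging helper here is hypothetical):
{code:java}
@Override
public Object invoke(Object proxy, Method method, Object[] args)
    throws Throwable {
  if (currentUsedProxy != null) {
    // A previously successful NN is already known: call it directly.
    return method.invoke(currentUsedProxy.proxy, args);
  }
  // First call, or after failover: hedge across all configured NNs and
  // remember the winner in currentUsedProxy.
  return invokeOnAllAndRecordWinner(method, args); // hypothetical helper
}
{code}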



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-10 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432586#comment-16432586
 ] 

Rakesh R commented on HDFS-13388:
-

Hi [~elgoiri], {{branch-3.0}} also has compilation errors.
{code:java}
[ERROR] 
/C:/Projects/Hdp/branch-3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java:[79,11]
 cannot find symbol
[ERROR] symbol:   variable currentUsedProxy
{code}
I can see the HDFS-12813 changes don't exist in branch-3.0, which causes the 
failure. Please consider this branch while correcting. Thanks!

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, 
> HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, 
> HADOOP-13388.0006.patch
>
>
> In HDFS-7858, RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every time, 
> even when we have already got the successful NN. 
> That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class 
> handles invoked methods by calling multiple configured NNs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13388:
---
Fix Version/s: (was: 3.0.4)

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, 
> HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, 
> HADOOP-13388.0006.patch
>
>
> In HDFS-7858, RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every time, 
> even when we have already got the successful NN. 
> That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class 
> handles invoked methods by calling multiple configured NNs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13386) RBF: Wrong date information in list file(-ls) result

2018-04-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432575#comment-16432575
 ] 

Íñigo Goiri commented on HDFS-13386:


Just one minor thing on [^HDFS-13386-005.patch]: can we put the javadoc of the 
unit test before the method?

> RBF: Wrong date information in list file(-ls) result
> 
>
> Key: HDFS-13386
> URL: https://issues.apache.org/jira/browse/HDFS-13386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
> Attachments: HDFS-13386-002.patch, HDFS-13386-003.patch, 
> HDFS-13386-004.patch, HDFS-13386-005.patch, HDFS-13386.000.patch, 
> HDFS-13386.001.patch, image-2018-04-03-11-59-51-623.png
>
>
> # hdfs dfs -ls 
> !image-2018-04-03-11-59-51-623.png!
> This is happening because getMountPointDates is not implemented: 
> {code:java}
> private Map<String, Long> getMountPointDates(String path) {
>   Map<String, Long> ret = new TreeMap<>();
>   // TODO add when we have a Mount Table
>   return ret;
> }
> {code}
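
For illustration, a hedged sketch of what an implementation could look like once mount-table information is available; the helper and record accessor names are assumptions:
{code:java}
private Map<String, Long> getMountPointDates(String path) {
  Map<String, Long> ret = new TreeMap<>();
  // Hypothetical: walk the mount entries under 'path' and report their
  // modification times; getMountTableEntries is an assumed helper.
  for (MountTable entry : getMountTableEntries(path)) {
    ret.put(entry.getSourcePath(), entry.getDateModified());
  }
  return ret;
}
{code}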



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13418) NetworkTopology should be configurable when enable DFSNetworkTopology

2018-04-10 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432565#comment-16432565
 ] 

Chen Liang commented on HDFS-13418:
---

Thanks for the discussion [~Tao Jie] and [~linyiqun]! The change in v001 patch 
makes sense to me. Just that we should probably document somewhere how all the 
related config entries work with each other. Any other comments [~linyiqun]?

>  NetworkTopology should be configurable when enable DFSNetworkTopology
> --
>
> Key: HDFS-13418
> URL: https://issues.apache.org/jira/browse/HDFS-13418
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.1
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HDFS-13418.001.patch
>
>
> In HDFS-11530 we introduce DFSNetworkTopology and in HDFS-11998 we set 
> DFSNetworkTopology as the default implementation.
> We still have {{net.topology.impl=org.apache.hadoop.net.NetworkTopology}} in 
> core-site.default. Actually this property takes no effect once 
> {{dfs.use.dfs.network.topology}} is true. 
> In {{DatanodeManager}}, networkTopology is initialized as 
> {code}
> if (useDfsNetworkTopology) {
>   networktopology = DFSNetworkTopology.getInstance(conf);
> } else {
>   networktopology = NetworkTopology.getInstance(conf);
> }
> {code}
> I think we should still make the NetworkTopology configurable rather than 
> hard-code the implementation, since we may need another NetworkTopology impl.
> I am not sure if there are other considerations. Any thoughts? [~vagarychen] 
> [~linyiqun]
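A minimal sketch of the direction suggested above, resolving the 
implementation from configuration in both branches; the dfs-side key name is a 
made-up placeholder, and conf.getClass plus ReflectionUtils is roughly how 
NetworkTopology.getInstance already resolves net.topology.impl.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.net.NetworkTopology;
import org.apache.hadoop.util.ReflectionUtils;

/** Illustrative sketch; "dfs.net.topology.impl" is a hypothetical key. */
public class TopologyFactory {
  static NetworkTopology create(Configuration conf, boolean useDfsTopology) {
    // Let each mode pick its implementation from configuration instead
    // of hard-coding one of the two classes.
    String key = useDfsTopology ? "dfs.net.topology.impl"
                                : "net.topology.impl";
    Class<? extends NetworkTopology> clazz =
        conf.getClass(key, NetworkTopology.class, NetworkTopology.class);
    return ReflectionUtils.newInstance(clazz, conf);
  }
}
{code}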



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13197) Ozone: Fix ConfServlet#getOzoneTags cmd after HADOOP-15007

2018-04-10 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13197:
--
Status: Patch Available  (was: Open)

> Ozone: Fix ConfServlet#getOzoneTags cmd after HADOOP-15007
> --
>
> Key: HDFS-13197
> URL: https://issues.apache.org/jira/browse/HDFS-13197
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13197-HDFS-7240.000.patch, 
> HDFS-13197-HDFS-7240.001.patch
>
>
> This is broken after merging trunk change HADOOP-15007 into HDFS-7240 branch. 
> I remove the cmd and related test to have a clean merge. [~ajakumar], please 
> fix the cmd and bring back the related test. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13045) RBF: Improve error message returned from subcluster

2018-04-10 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan reassigned HDFS-13045:
--

Assignee: Íñigo Goiri  (was: Wei Yan)

> RBF: Improve error message returned from subcluster
> ---
>
> Key: HDFS-13045
> URL: https://issues.apache.org/jira/browse/HDFS-13045
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13045.000.patch, HDFS-13045.001.patch, 
> HDFS-13045.002.patch, HDFS-13045.003.patch
>
>
> Currently, the Router directly returns the exception response from the 
> subcluster to the client, which may not have the correct error message, 
> especially when the error message contains a path.
> One example: we have a mount path "/a/b" mapped to subclusterA's "/c/d". If 
> user1 does a chown operation on "/a/b" and doesn't have the corresponding 
> privilege, currently the error msg looks like "Permission denied. user=user1 
> is not the owner of inode=/c/d", which may confuse the user. It would be 
> better to map the path back to the original mount path.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13197) Ozone: Fix ConfServlet#getOzoneTags cmd after HADOOP-15007

2018-04-10 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13197:
--
Attachment: HDFS-13197-HDFS-7240.001.patch

> Ozone: Fix ConfServlet#getOzoneTags cmd after HADOOP-15007
> --
>
> Key: HDFS-13197
> URL: https://issues.apache.org/jira/browse/HDFS-13197
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13197-HDFS-7240.000.patch, 
> HDFS-13197-HDFS-7240.001.patch
>
>
> This is broken after merging trunk change HADOOP-15007 into HDFS-7240 branch. 
> I remove the cmd and related test to have a clean merge. [~ajakumar], please 
> fix the cmd and bring back the related test. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13410) RBF: Support federation with no subclusters

2018-04-10 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432508#comment-16432508
 ] 

Íñigo Goiri commented on HDFS-13410:


Thanks [~linyiqun] for the commit and the review.

> RBF: Support federation with no subclusters
> ---
>
> Key: HDFS-13410
> URL: https://issues.apache.org/jira/browse/HDFS-13410
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13410.000.patch, HDFS-13410.001.patch, 
> HDFS-13410.002.patch
>
>
> If the federation has no subclusters, the logs have long stack traces. Even 
> though this is not a regular setup for RBF, we should log a proper message 
> instead.
> An example:
> {code}
> Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>   at java.util.LinkedList.checkElementIndex(LinkedList.java:555)
>   at java.util.LinkedList.get(LinkedList.java:476)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1028)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getDatanodeReport(RouterRpcServer.java:1264)
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.FederationMetrics.getNodeUsage(FederationMetrics.java:424)
> {code}
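A tiny sketch of the kind of guard this implies (illustrative names, not the 
Router code): check the subcluster list up front and fail with a readable 
message rather than the IndexOutOfBoundsException above.

{code:java}
import java.io.IOException;
import java.util.Collection;

/** Illustrative sketch of a fail-fast guard for empty federations. */
public class SubclusterGuard {
  static <T> T firstLocation(Collection<T> locations) throws IOException {
    if (locations == null || locations.isEmpty()) {
      // One clear log line instead of a long stack trace.
      throw new IOException("No subclusters registered for this federation");
    }
    return locations.iterator().next();
  }
}
{code}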



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-04-10 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432510#comment-16432510
 ] 

Íñigo Goiri commented on HDFS-13384:


Thanks [~ajayydv] and [~linyiqun] for the reviews.

> RBF: Improve timeout RPC call mechanism
> ---
>
> Key: HDFS-13384
> URL: https://issues.apache.org/jira/browse/HDFS-13384
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13384.000.patch, HDFS-13384.001.patch, 
> HDFS-13384.002.patch, HDFS-13384.003.patch, HDFS-13384.004.patch
>
>
> When issuing RPC requests to subclusters, we have a timeout mechanism 
> introduced in HDFS-12273. We need to improve how this is handled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13045) RBF: Improve error message returned from subcluster

2018-04-10 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432506#comment-16432506
 ] 

Wei Yan commented on HDFS-13045:


Thanks [~elgoiri] for [^HDFS-13045.003.patch]. Another corner case would be 
when the dst appears more than once in the error message; replaceFirst would 
only replace the first occurrence. But there the implementation would be more 
tricky, and I don't have a case where the NN would generate such a message. 
I'm OK with the current approach, and we can come back if you find other types 
of unhandled errors in the future.
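As a plain-Java illustration of that corner case (a sketch, not the Router 
implementation): String.replace substitutes every literal occurrence, 
sidestepping both the single-substitution limit of replaceFirst and the 
regex-escaping pitfalls of replaceAll. The sample message below is made up for 
the demonstration.

{code:java}
/** Illustrative sketch of remapping every occurrence of the path. */
public class ErrorMessageRemap {
  static String remap(String msg, String remotePath, String mountPath) {
    // replace() treats both arguments as literals and substitutes all
    // occurrences, so a path appearing twice is fully remapped.
    return msg.replace(remotePath, mountPath);
  }

  public static void main(String[] args) {
    String msg = "Permission denied. user=user1 is not the owner of "
        + "inode=/c/d (source=/c/d)";
    // Prints the message with both /c/d occurrences mapped to /a/b.
    System.out.println(remap(msg, "/c/d", "/a/b"));
  }
}
{code}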

> RBF: Improve error message returned from subcluster
> ---
>
> Key: HDFS-13045
> URL: https://issues.apache.org/jira/browse/HDFS-13045
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HDFS-13045.000.patch, HDFS-13045.001.patch, 
> HDFS-13045.002.patch, HDFS-13045.003.patch
>
>
> Currently, the Router directly returns the exception response from the 
> subcluster to the client, which may not have the correct error message, 
> especially when the error message contains a path.
> One example: we have a mount path "/a/b" mapped to subclusterA's "/c/d". If 
> user1 does a chown operation on "/a/b" and doesn't have the corresponding 
> privilege, currently the error msg looks like "Permission denied. user=user1 
> is not the owner of inode=/c/d", which may confuse the user. It would be 
> better to map the path back to the original mount path.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-10 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432503#comment-16432503
 ] 

Íñigo Goiri edited comment on HDFS-13388 at 4/10/18 3:52 PM:
-

Actually, branch-2 is fine as the rename of the variable is already there.
[~LiJinglun], please post a branch-2.9 patch if you want it there.

Sorry [~Sammi] for the inconvenience.


was (Author: elgoiri):
Actually, branch-2 is fine as the rename of the variable is already there.
[~LiJinglun], please post a branch-2.9 patch if you want it there.

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, 
> HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, 
> HADOOP-13388.0006.patch
>
>
> In HDFS-7858 RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even when we have already found the successful NN.
>  That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class 
> handles invoked methods by calling multiple configured NNs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-10 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432503#comment-16432503
 ] 

Íñigo Goiri commented on HDFS-13388:


Actually, branch-2 is fine as the rename of the variable is already there.
[~LiJinglun], please post a branch-2.9 patch if you want it there.

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, 
> HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, 
> HADOOP-13388.0006.patch
>
>
> In HDFS-7858 RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even when we have already found the successful NN.
>  That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class 
> handles invoked methods by calling multiple configured NNs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-10 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13388:
---
Fix Version/s: (was: 2.9.2)

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, 
> HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, 
> HADOOP-13388.0006.patch
>
>
> In HDFS-7858 RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even when we have already found the successful NN.
>  That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class 
> handles invoked methods by calling multiple configured NNs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13279) Datanodes usage is imbalanced if number of nodes per rack is not equal

2018-04-10 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432482#comment-16432482
 ] 

Tao Jie commented on HDFS-13279:


Thank you [~ajayydv] for the reply!
1, I set up another jira, HDFS-13418, and I will try not to modify any current 
logic.
2, Let me try to make my point clear. There are 2 approaches to choosing the 
right node:
# As you mentioned, first choose the right rack with weight, then choose a node 
from the selected rack.
# As the code in the patch does, first choose a node as before, then check 
whether it needs to be re-chosen according to the weight of the rack.

I understand approach 1 could be more accurate, but approach 2 also works well, 
as I have tested. I prefer approach 2 because it reuses rather than rewrites 
the current node-choosing logic.
Regarding performance, approach 2 does have some overhead: in a certain 
proportion of choices we may choose twice. But I don't think the chance will be 
over 10% in a typical case. For approach 1, I think we should test the 
performance more, since the choosing logic is totally changed and it seems to 
be heavier than the former logic.
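To make approach 2 concrete, here is a toy sketch (not the attached patch; the 
names and the acceptance-weight callback are made up): keep the existing 
uniform chooser, then re-choose when the picked node's rack is over-weighted.

{code:java}
import java.util.List;
import java.util.Random;
import java.util.function.ToDoubleFunction;

/**
 * Toy sketch of "approach 2": choose a node with the existing logic,
 * then accept it only with a probability derived from its rack's
 * weight, otherwise choose again. Names are illustrative.
 */
public class RechooseByRackWeight {
  private final Random rand = new Random();

  /** rackWeight returns an acceptance probability in (0, 1]. */
  <N> N choose(List<N> nodes, ToDoubleFunction<N> rackWeight) {
    while (true) {
      // Existing behaviour: uniform pick over all candidate nodes.
      N candidate = nodes.get(rand.nextInt(nodes.size()));
      // New check: nodes on small racks, which would otherwise absorb
      // a disproportionate share of 2nd/3rd replicas, are accepted
      // with lower probability, triggering the occasional re-choose.
      if (rand.nextDouble() <= rackWeight.applyAsDouble(candidate)) {
        return candidate;
      }
    }
  }
}
{code}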


> Datanodes usage is imbalanced if number of nodes per rack is not equal
> --
>
> Key: HDFS-13279
> URL: https://issues.apache.org/jira/browse/HDFS-13279
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.3, 3.0.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HDFS-13279.001.patch, HDFS-13279.002.patch, 
> HDFS-13279.003.patch, HDFS-13279.004.patch, HDFS-13279.005.patch
>
>
> In a Hadoop cluster, the number of nodes per rack can differ. For example, 
> if we have 50 datanodes in all and 15 datanodes per rack, 5 nodes would 
> remain on the last rack. In this situation, we find that storage usage on 
> the last 5 nodes is much higher than on other nodes.
>  With the default block placement policy, for each block the first replica 
> has the same probability of being written to each datanode, but the 
> probability of the 2nd/3rd replica being written to the last 5 nodes is much 
> higher than for other nodes. 
>  Consider writing 50 blocks to these 50 datanodes. The first reps of the 50 
> blocks would be distributed to the 50 nodes equally. The 2nd reps of the 
> blocks whose 1st rep is on rack1 (15 reps) would be sent equally to the 
> other 35 nodes, so each of those nodes receives 0.43 rep; the same holds for 
> blocks on rack2 and rack3. As a result, each node on rack4 (5 nodes) would 
> receive 1.29 reps in all, while every other node would receive 0.97 reps.
> ||-||Rack1(15 nodes)||Rack2(15 nodes)||Rack3(15 nodes)||Rack4(5 nodes)||
> |From rack1|-|15/35=0.43|0.43|0.43|
> |From rack2|0.43|-|0.43|0.43|
> |From rack3|0.43|0.43|-|0.43|
> |From rack4|5/45=0.11|0.11|0.11|-|
> |Total|0.97|0.97|0.97|1.29|
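The table's numbers can be reproduced with a few lines of arithmetic; the 
following sketch (purely illustrative) computes the expected number of 2nd 
reps per node for the 15/15/15/5 layout described above.

{code:java}
/**
 * Reproduces the expected per-node 2nd-rep load from the table above
 * for a 15/15/15/5 rack layout and 50 blocks (one 2nd rep each).
 */
public class RackLoadTable {
  public static void main(String[] args) {
    int[] racks = {15, 15, 15, 5};
    int totalNodes = 50;  // one 1st rep lands on each node
    double[] perNode = new double[racks.length];
    for (int src = 0; src < racks.length; src++) {
      // The 2nd reps of the racks[src] blocks whose 1st rep is on
      // rack 'src' spread evenly over the nodes outside that rack.
      double share = (double) racks[src] / (totalNodes - racks[src]);
      for (int dst = 0; dst < racks.length; dst++) {
        if (dst != src) {
          perNode[dst] += share;
        }
      }
    }
    // Prints 0.97 for rack1-3 and 1.29 for rack4, matching the table.
    for (int r = 0; r < racks.length; r++) {
      System.out.printf("rack%d: %.2f reps per node%n", r + 1, perNode[r]);
    }
  }
}
{code}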



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7101) Potential null dereference in DFSck#doWork()

2018-04-10 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-7101:

Description: 
{code}
String lastLine = null;
int errCode = -1;
try {
  while ((line = input.readLine()) != null) {
...
if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
  errCode = 0;
{code}
If the input stream is empty, lastLine is kept null, leading to an NPE.

  was:
{code}
String lastLine = null;
int errCode = -1;
try {
  while ((line = input.readLine()) != null) {
...
if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
  errCode = 0;
{code}
If readLine() throws an exception, lastLine may be null, leading to an NPE.


> Potential null dereference in DFSck#doWork()
> 
>
> Key: HDFS-7101
> URL: https://issues.apache.org/jira/browse/HDFS-7101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.1
>Reporter: Ted Yu
>Assignee: skrho
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7101.v1.patch, HDFS-7101_001.patch
>
>
> {code}
> String lastLine = null;
> int errCode = -1;
> try {
>   while ((line = input.readLine()) != null) {
> ...
> if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
>   errCode = 0;
> {code}
> If the input stream is empty, lastLine is kept null, leading to an NPE.
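For reference, a self-contained sketch of the null-safe variant the 
description implies (illustrative, not the committed patch); the HEALTHY 
constant is a stand-in for NamenodeFsck.HEALTHY_STATUS.

{code:java}
import java.io.BufferedReader;
import java.io.IOException;

/** Illustrative sketch of the guarded exit-code computation. */
public class FsckExitCode {
  /** Stand-in for NamenodeFsck.HEALTHY_STATUS (assumption). */
  static final String HEALTHY = "is HEALTHY";

  static int errCode(BufferedReader input) throws IOException {
    String line;
    String lastLine = null;
    int errCode = -1;
    while ((line = input.readLine()) != null) {
      lastLine = line;  // remember the final line of the report
    }
    // Null check added: on an empty stream lastLine stays null, and the
    // unguarded endsWith() call in the original snippet would NPE.
    if (lastLine != null && lastLine.endsWith(HEALTHY)) {
      errCode = 0;
    }
    return errCode;
  }
}
{code}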



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7101) Potential null dereference in DFSck#doWork()

2018-04-10 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432438#comment-16432438
 ] 

Akira Ajisaka commented on HDFS-7101:
-

LGTM, +1

> Potential null dereference in DFSck#doWork()
> 
>
> Key: HDFS-7101
> URL: https://issues.apache.org/jira/browse/HDFS-7101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.1
>Reporter: Ted Yu
>Assignee: skrho
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7101.v1.patch, HDFS-7101_001.patch
>
>
> {code}
> String lastLine = null;
> int errCode = -1;
> try {
>   while ((line = input.readLine()) != null) {
> ...
> if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
>   errCode = 0;
> {code}
> If readLine() throws an exception, lastLine may be null, leading to an NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-10 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432428#comment-16432428
 ] 

Íñigo Goiri commented on HDFS-13388:


Let me revert it from branch-2 and branch-2.9.
[~LiJinglun], if you want this there, post a branch-2 patch. 

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, 
> HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, 
> HADOOP-13388.0006.patch
>
>
> In HDFS-7858 RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even when we have already found the successful NN.
>  That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class 
> handles invoked methods by calling multiple configured NNs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13333) Ozone: Introduce a new SCM Exception which will be thrown when mandatory property is missing

2018-04-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432424#comment-16432424
 ] 

genericqa commented on HDFS-13333:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
31s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
12s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
24s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} common in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
1s{color} | {color:red} hadoop-hdds/common in HDFS-7240 has 2 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} common in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
22s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
21s{color} | {color:red} common in HDFS-7240 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
12s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
23s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
23s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
22s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} common in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| 

[jira] [Commented] (HDFS-13419) client can communicate to server even if hdfs delegation token expired

2018-04-10 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432402#comment-16432402
 ] 

Kihwal Lee commented on HDFS-13419:
---

Yes, a token is checked on connection establishment. If the connection stays 
up, token expiration won't affect ongoing requests. 
Do you have a use case that requires the connection to be closed by the server 
on token expiration?
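One plausible explanation for the 10-second threshold reported below (an 
assumption, not confirmed in this thread): the IPC client closes connections 
idle longer than ipc.client.connection.maxidletime, which defaults to 10000 
ms, so with a longer batch interval the next call must re-establish and 
re-authenticate the connection, at which point the expired token is rejected. 
A minimal check of the configured value:

{code:java}
import org.apache.hadoop.conf.Configuration;

/**
 * Minimal sketch: read the IPC idle-connection timeout. Its 10000 ms
 * default lines up with the reporter's 10-second threshold, assuming
 * the failure is triggered by connection re-establishment.
 */
public class IdleTimeoutCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    long maxIdleMs = conf.getLong("ipc.client.connection.maxidletime", 10000);
    System.out.println(
        "ipc.client.connection.maxidletime = " + maxIdleMs + " ms");
  }
}
{code}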

> client can communicate to server even if hdfs delegation token expired
> --
>
> Key: HDFS-13419
> URL: https://issues.apache.org/jira/browse/HDFS-13419
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangqiang.shen
>Priority: Minor
>
> I was testing the HDFS delegation token expiry problem using Spark 
> Streaming. If I set my batch interval smaller than 10 sec, my Spark 
> Streaming program does not die, but if the batch interval is set bigger than 
> 10 sec, the program dies because of the HDFS delegation token expiry 
> problem, with the exception as follows:
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  token (HDFS_DELEGATION_TOKEN token 14042 for test) is expired
>   at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>   at com.sun.proxy.$Proxy11.getListing(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:554)
>   at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy12.getListing(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1969)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1952)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:693)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:105)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:755)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:751)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:751)
>   at com.envisioncn.arch.App$2$1.call(App.java:120)
>   at com.envisioncn.arch.App$2$1.call(App.java:91)
>   at 
> org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
>   at 
> org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:902)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:902)
>   at 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
>   at 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
>   at org.apache.spark.scheduler.Task.run(Task.scala:86)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> The Spark Streaming program only calls the FileSystem.listStatus function in 
> every batch:
> {code:java}
> FileSystem fs = FileSystem.get(new Configuration());
> FileStatus[] status =  fs.listStatus(new Path("/"));
> for(FileStatus status1 : status){
> System.out.println(status1.getPath());
> }
> {code}
> And I found that when the Hadoop client sends an RPC request to the server, 
> it will first get a connection object and set up the connection if it does 
> not exist. It will get a SaslRpcClient to connect to the server side in the 
> connection setup stage. The server will also authenticate the client at the 
> 

[jira] [Commented] (HDFS-13328) Abstract ReencryptionHandler recursive logic in separate class.

2018-04-10 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432397#comment-16432397
 ] 

Rakesh R commented on HDFS-13328:
-

Thank you [~xiaochen] for the useful reviews.
 Thank you [~surendrasingh] for the patch contribution.

I will commit this shortly to branches - {{trunk}}, {{branch-3.0}} and 
{{branch-3.1}}.

> Abstract ReencryptionHandler recursive logic in separate class.
> ---
>
> Key: HDFS-13328
> URL: https://issues.apache.org/jira/browse/HDFS-13328
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-13328-01.patch, HDFS-13328-02.patch, 
> HDFS-13328-03.patch, HDFS-13328-04.patch
>
>
> HDFS-10899 added DFS logic to scan a directory. It would be good to abstract 
> this logic into a separate class so it can be used by other features like 
> SPS (HDFS-10285). I already tried abstracting the DFS logic in HDFS-12291, 
> and the same can be pushed to trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13398) Hdfs recursive listing operation is very slow

2018-04-10 Thread Ajay Sachdev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Sachdev updated HDFS-13398:

Attachment: parallelfsPatch

> Hdfs recursive listing operation is very slow
> -
>
> Key: HDFS-13398
> URL: https://issues.apache.org/jira/browse/HDFS-13398
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
> Environment: HCFS file system where HDP 2.6.1 is connected to ECS 
> (Object Store).
>Reporter: Ajay Sachdev
>Priority: Major
> Fix For: 2.7.1
>
> Attachments: parallelfsPatch
>
>
> The hdfs dfs -ls -R command is sequential in nature and is very slow for an 
> HCFS system. We have seen around 6 mins for a 40K directory/file structure.
> The proposal is to use a multithreading approach to speed up the recursive 
> list, du and count operations.
> We have tried a ForkJoinPool implementation to improve performance for the 
> recursive listing operation.
> [https://github.com/jasoncwik/hadoop-release/tree/parallel-fs-cli]
> commit id : 
> 82387c8cd76c2e2761bd7f651122f83d45ae8876
> Another implementation is to use the Java ExecutorService to run the listing 
> operation in multiple threads in parallel. This has significantly reduced 
> the time from 6 mins to 40 secs.
>  
>  
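For illustration, a rough ForkJoinPool sketch of the approach described above 
(not the attached patch; the pool size is arbitrary and error handling is 
simplified): each directory becomes a task, and subdirectories fork as 
subtasks.

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Illustrative sketch of a parallel recursive listing. */
public class ParallelLs {
  static class ListTask extends RecursiveAction {
    private final FileSystem fs;
    private final Path dir;

    ListTask(FileSystem fs, Path dir) {
      this.fs = fs;
      this.dir = dir;
    }

    @Override
    protected void compute() {
      try {
        List<ListTask> subtasks = new ArrayList<>();
        for (FileStatus st : fs.listStatus(dir)) {
          System.out.println(st.getPath());  // output order is unspecified
          if (st.isDirectory()) {
            subtasks.add(new ListTask(fs, st.getPath()));
          }
        }
        invokeAll(subtasks);  // list subdirectories in parallel
      } catch (IOException e) {
        System.err.println("listing failed for " + dir + ": " + e);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    new ForkJoinPool(32).invoke(new ListTask(fs, new Path(args[0])));
  }
}
{code}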



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13420) License header is displayed in ArchivalStorage/MemoryStorage html pages

2018-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432374#comment-16432374
 ] 

Hudson commented on HDFS-13420:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13953 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13953/])
HDFS-13420. License header is displayed in ArchivalStorage/MemoryStorage (wwei: 
rev 6729047a8ba273d27edcc6a1a9d397a096f44d84)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ArchivalStorage.md
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/MemoryStorage.md


> License header is displayed in ArchivalStorage/MemoryStorage html pages
> ---
>
> Key: HDFS-13420
> URL: https://issues.apache.org/jira/browse/HDFS-13420
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.1.0, 3.0.1
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Minor
> Fix For: 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HDFS-13420.01.patch, HDFS-13420.02.patch
>
>
> Apache License header is wrongly displayed in 
> http://hadoop.apache.org/docs/r3.1.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13420) License header is displayed in ArchivalStorage/MemoryStorage html pages

2018-04-10 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-13420:
-
Fix Version/s: (was: 3.0.2)
   3.0.3

> License header is displayed in ArchivalStorage/MemoryStorage html pages
> ---
>
> Key: HDFS-13420
> URL: https://issues.apache.org/jira/browse/HDFS-13420
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.1.0, 3.0.1
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Minor
> Fix For: 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HDFS-13420.01.patch, HDFS-13420.02.patch
>
>
> Apache License header is wrongly displayed in 
> http://hadoop.apache.org/docs/r3.1.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13420) License header is displayed in ArchivalStorage/MemoryStorage html pages

2018-04-10 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432346#comment-16432346
 ] 

Akira Ajisaka commented on HDFS-13420:
--

Thanks!

> License header is displayed in ArchivalStorage/MemoryStorage html pages
> ---
>
> Key: HDFS-13420
> URL: https://issues.apache.org/jira/browse/HDFS-13420
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.1.0, 3.0.1
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Minor
> Fix For: 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HDFS-13420.01.patch, HDFS-13420.02.patch
>
>
> Apache License header is wrongly displayed in 
> http://hadoop.apache.org/docs/r3.1.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13420) License header is displayed in ArchivalStorage/MemoryStorage html pages

2018-04-10 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432338#comment-16432338
 ] 

Weiwei Yang commented on HDFS-13420:


Committed to trunk, cherry-picked to branch-3.1 and branch-3.0. Thanks 
[~ajisakaa].

> License header is displayed in ArchivalStorage/MemoryStorage html pages
> ---
>
> Key: HDFS-13420
> URL: https://issues.apache.org/jira/browse/HDFS-13420
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.1.0, 3.0.1
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Minor
> Fix For: 3.0.2, 3.2.0, 3.1.1
>
> Attachments: HDFS-13420.01.patch, HDFS-13420.02.patch
>
>
> Apache License header is wrongly displayed in 
> http://hadoop.apache.org/docs/r3.1.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


