[jira] [Commented] (HDDS-8) Add OzoneManager Delegation Token support

2018-10-01 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-8?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16635074#comment-16635074
 ] 

Ajay Kumar commented on HDDS-8:
---

Patch v12 with the following changes to {{OzoneManager}}:
 * Removes the privateKey stub method from {{OzoneManager}}.
 * Throws an exception if the private/public key file doesn't exist (a rough sketch of the intended check follows).
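For reference, a minimal sketch of the fail-fast key loading (hypothetical class and method names, assuming a PKCS#8-encoded RSA key on disk; not the actual patch code):
{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.GeneralSecurityException;
import java.security.KeyFactory;
import java.security.PrivateKey;
import java.security.spec.PKCS8EncodedKeySpec;

class OmKeyLoader { // hypothetical name, not the patch's class
  // Fail fast instead of falling back to a stub when the key file is absent.
  PrivateKey readPrivateKey(Path keyFile) throws IOException {
    if (!Files.exists(keyFile)) {
      throw new FileNotFoundException(
          "OzoneManager private key file not found: " + keyFile);
    }
    byte[] encoded = Files.readAllBytes(keyFile);
    try {
      // Assumption: the key is stored PKCS#8-encoded, RSA.
      return KeyFactory.getInstance("RSA")
          .generatePrivate(new PKCS8EncodedKeySpec(encoded));
    } catch (GeneralSecurityException e) {
      throw new IOException("Failed to load OM private key", e);
    }
  }
}
{code}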

> Add OzoneManager Delegation Token support
> -
>
> Key: HDDS-8
> URL: https://issues.apache.org/jira/browse/HDDS-8
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-8-HDDS-4.00.patch, HDDS-8-HDDS-4.01.patch, 
> HDDS-8-HDDS-4.02.patch, HDDS-8-HDDS-4.03.patch, HDDS-8-HDDS-4.04.patch, 
> HDDS-8-HDDS-4.05.patch, HDDS-8-HDDS-4.06.patch, HDDS-8-HDDS-4.07.patch, 
> HDDS-8-HDDS-4.08.patch, HDDS-8-HDDS-4.09.patch, HDDS-8-HDDS-4.10.patch, 
> HDDS-8-HDDS-4.11.patch, HDDS-8-HDDS-4.12.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-8) Add OzoneManager Delegation Token support

2018-10-01 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-8?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-8:
--
Attachment: HDDS-8-HDDS-4.12.patch

> Add OzoneManager Delegation Token support
> -
>
> Key: HDDS-8
> URL: https://issues.apache.org/jira/browse/HDDS-8
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-8-HDDS-4.00.patch, HDDS-8-HDDS-4.01.patch, 
> HDDS-8-HDDS-4.02.patch, HDDS-8-HDDS-4.03.patch, HDDS-8-HDDS-4.04.patch, 
> HDDS-8-HDDS-4.05.patch, HDDS-8-HDDS-4.06.patch, HDDS-8-HDDS-4.07.patch, 
> HDDS-8-HDDS-4.08.patch, HDDS-8-HDDS-4.09.patch, HDDS-8-HDDS-4.10.patch, 
> HDDS-8-HDDS-4.11.patch, HDDS-8-HDDS-4.12.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13877) HttpFS: Implement GETSNAPSHOTDIFF

2018-10-01 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13877:
--
Status: In Progress  (was: Patch Available)

> HttpFS: Implement GETSNAPSHOTDIFF
> -
>
> Key: HDFS-13877
> URL: https://issues.apache.org/jira/browse/HDFS-13877
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13877.001.patch, HDFS-13877.001.patch, 
> HDFS-13877.002.patch, HDFS-13877.003.patch
>
>
> Implement GETSNAPSHOTDIFF (from HDFS-13052) in HttpFS.
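> For illustration, the HttpFS call should mirror the WebHDFS operation added in
> HDFS-13052; the host, path, and snapshot names below are placeholders, and this
> assumes HttpFS keeps the WebHDFS parameter names and its default port 14000:
> {code}
> curl -i "http://<HTTPFS_HOST>:14000/webhdfs/v1/tmp/dir?op=GETSNAPSHOTDIFF&user.name=hdfs&oldsnapshotname=s1&snapshotname=s2"
> {code}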



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13877) HttpFS: Implement GETSNAPSHOTDIFF

2018-10-01 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13877:
--
Attachment: HDFS-13877.003.patch
Status: Patch Available  (was: In Progress)

> HttpFS: Implement GETSNAPSHOTDIFF
> -
>
> Key: HDFS-13877
> URL: https://issues.apache.org/jira/browse/HDFS-13877
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13877.001.patch, HDFS-13877.001.patch, 
> HDFS-13877.002.patch, HDFS-13877.003.patch
>
>
> Implement GETSNAPSHOTDIFF (from HDFS-13052) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HDFS-13877) HttpFS: Implement GETSNAPSHOTDIFF

2018-10-01 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13877 stopped by Siyao Meng.
-
> HttpFS: Implement GETSNAPSHOTDIFF
> -
>
> Key: HDFS-13877
> URL: https://issues.apache.org/jira/browse/HDFS-13877
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13877.001.patch, HDFS-13877.001.patch, 
> HDFS-13877.002.patch, HDFS-13877.003.patch
>
>
> Implement GETSNAPSHOTDIFF (from HDFS-13052) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-13877) HttpFS: Implement GETSNAPSHOTDIFF

2018-10-01 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13877 started by Siyao Meng.
-
> HttpFS: Implement GETSNAPSHOTDIFF
> -
>
> Key: HDFS-13877
> URL: https://issues.apache.org/jira/browse/HDFS-13877
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13877.001.patch, HDFS-13877.001.patch, 
> HDFS-13877.002.patch, HDFS-13877.003.patch
>
>
> Implement GETSNAPSHOTDIFF (from HDFS-13052) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13945) TestDataNodeVolumeFailure is Flaky

2018-10-01 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16635044#comment-16635044
 ] 

Ayush Saxena commented on HDFS-13945:
-

Thanx [~knanasi] for the comment.
 IIUC, the heartbeat check function mostly matters for the dead or
decommissioning datanode scenarios and wasn't really serving any purpose for
our test even before. What we require is the reporting of the disk error, which
happens as part of the write process, and the failed-volume count, which we get
with the next heartbeat; that alone covers our purpose. As for the heartbeat
interval, we are already well equipped with the supplier, which gives us the
liberty to wait for the correct heartbeat (a sketch of what I mean follows). So
I don't think there was, or is, any need for the heartbeatCheck() function.
 If it serves any other related purpose that I might have missed, we can for
sure add it back. :)
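For illustration, the kind of wait I mean (a rough sketch; the exact condition and method names in the patch may differ):
{code:java}
// Wait until the NN has learned about the failed volume from a later
// heartbeat, instead of forcing the heartbeat check by hand.
GenericTestUtils.waitFor(
    () -> cluster.getNamesystem().getVolumeFailuresTotal() > 0,
    100,      // poll every 100 ms
    30_000);  // give up after 30 s
{code}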

> TestDataNodeVolumeFailure is Flaky
> --
>
> Key: HDFS-13945
> URL: https://issues.apache.org/jira/browse/HDFS-13945
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13945-01.patch, HDFS-13945-02.patch, 
> HDFS-13945-03.patch
>
>
> The test has been failing in trunk for a long time.
> Reference -
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25140/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25135/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25133/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25104/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
>  
> Stack Trace -
>  
> Timed out waiting for condition. Thread diagnostics: Timestamp: 2018-09-26 
> 03:32:07,162 "IPC Server handler 2 on 33471" daemon prio=5 tid=2931 
> timed_waiting java.lang.Thread.State: TIMED_WAITING at 
> sun.misc.Unsafe.park(Native Method) at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  at 
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
> at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:288) at 
> org.apache.hadoop.ipc.Server$Handler.run(Server.java:2668) "IPC Server 
> handler 3 on 34285" daemon prio=5 tid=2646 timed_waiting 
> java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) 
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  at 
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
> at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:288) at 
> org.apache.hadoop.ipc.Server$Handler.run(Server.java:2668) 
> "org.apache.hadoop.util.JvmPauseMonitor$Monitor@1d2ee4cd" daemon prio=5 
> tid=2633 timed_waiting java.lang.Thread.State: TIMED_WAITING at 
> java.lang.Thread.sleep(Native Method) at 
> org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) 
> at java.lang.Thread.run(Thread.java:748) "IPC Server Responder" daemon prio=5 
> tid=2766 runnable java.lang.Thread.State: RUNNABLE at 
> sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
> sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
> sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at 
> sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
> sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
> org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1334) at 
> org.apache.hadoop.ipc.Server$Responder.run(Server.java:1317) 
> "org.eclipse.jetty.server.session.HashSessionManager@1287fc65Timer" daemon 
> prio=5 tid=2492 timed_waiting java.lang.Thread.State: TIMED_WAITING at 
> sun.misc.Unsafe.park(Native Method) at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
>  at 
> java.util.concurrent.ScheduledThreadPoolEx

[jira] [Commented] (HDFS-13947) Review of DirectoryScanner Class

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16635025#comment-16635025
 ] 

Hadoop QA commented on HDFS-13947:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  6s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 10 new + 663 unchanged - 49 fixed = 673 total (was 712) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}126m 20s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}205m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
|   | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13947 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942056/HDFS-13947.3.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ead8906f7a1a 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f6c5ef9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25179/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.

[jira] [Commented] (HDFS-13768) Adding replicas to volume map makes DataNode start slowly

2018-10-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634848#comment-16634848
 ] 

Hudson commented on HDFS-13768:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15090 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15090/])
HDFS-13768. Adding replicas to volume map makes DataNode start slowly. (yqlin: 
rev 5689355783de005ebc604f4403dc5129a286bfca)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsVolumeList.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetUtil.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java


>  Adding replicas to volume map makes DataNode start slowly 
> ---
>
> Key: HDFS-13768
> URL: https://issues.apache.org/jira/browse/HDFS-13768
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: HDFS-13768.01.patch, HDFS-13768.02.patch, 
> HDFS-13768.03.patch, HDFS-13768.04.patch, HDFS-13768.05.patch, 
> HDFS-13768.06.patch, HDFS-13768.07.patch, HDFS-13768.patch, screenshot-1.png
>
>
> We found the DNs starting very slowly when rolling-upgrading our cluster. When 
> we restart the DNs, they start slowly and do not register to the NN 
> immediately, which causes a lot of the following errors:
> {noformat}
> DataXceiver error processing WRITE_BLOCK operation  src: /xx.xx.xx.xx:64360 
> dst: /xx.xx.xx.xx:50010
> java.io.IOException: Not ready to serve the block pool, 
> BP-1508644862-xx.xx.xx.xx-1493781183457.
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAndWaitForBP(DataXceiver.java:1290)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1298)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:630)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Looking into the DN startup logic, it performs the initial block pool 
> operation before registration. During block pool initialization, we found that 
> adding replicas to the volume map is the most expensive operation. Related log:
> {noformat}
> 2018-07-26 10:46:23,771 INFO [Thread-105] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/1/dfs/dn/current: 242722ms
> 2018-07-26 10:46:26,231 INFO [Thread-109] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/5/dfs/dn/current: 245182ms
> 2018-07-26 10:46:32,146 INFO [Thread-112] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/8/dfs/dn/current: 251097ms
> 2018-07-26 10:47:08,283 INFO [Thread-106] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/2/dfs/dn/current: 287235ms
> {noformat}
> Currently the DN uses an independent thread to scan and add replicas for each 
> volume, but we still have to wait for the slowest thread to finish its work. 
> So the key opportunity here is to make each thread run faster.
> The jstack we captured while the DN was blocked adding replicas:
> {noformat}
> "Thread-113" #419 daemon prio=5 os_prio=0 tid=0x7f40879ff000

[jira] [Commented] (HDFS-13949) Correct the description of dfs.datanode.disk.check.timeout in hdfs-default.xml

2018-10-01 Thread Toshihiro Suzuki (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634843#comment-16634843
 ] 

Toshihiro Suzuki commented on HDFS-13949:
-

I just attached a patch to correct the description. Could someone please review 
it?

> Correct the description of dfs.datanode.disk.check.timeout in hdfs-default.xml
> --
>
> Key: HDFS-13949
> URL: https://issues.apache.org/jira/browse/HDFS-13949
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Toshihiro Suzuki
>Priority: Minor
> Attachments: HDFS-13949.1.patch
>
>
> The description of dfs.datanode.disk.check.timeout in hdfs-default.xml is as 
> follows:
> {code}
> <property>
>   <name>dfs.datanode.disk.check.timeout</name>
>   <value>10m</value>
>   <description>
>     Maximum allowed time for a disk check to complete during DataNode
>     startup. If the check does not complete within this time interval
>     then the disk is declared as failed. This setting supports
>     multiple time unit suffixes as described in dfs.heartbeat.interval.
>     If no suffix is specified then milliseconds is assumed.
>   </description>
> </property>
> {code}
> I don't think the value of this config is used only during DataNode startup; 
> I think it's used whenever volumes are checked.
> The description is misleading, so we need to correct it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13949) Correct the description of dfs.datanode.disk.check.timeout in hdfs-default.xml

2018-10-01 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HDFS-13949:

Attachment: HDFS-13949.1.patch

> Correct the description of dfs.datanode.disk.check.timeout in hdfs-default.xml
> --
>
> Key: HDFS-13949
> URL: https://issues.apache.org/jira/browse/HDFS-13949
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Toshihiro Suzuki
>Priority: Minor
> Attachments: HDFS-13949.1.patch
>
>
> The description of dfs.datanode.disk.check.timeout in hdfs-default.xml is as 
> follows:
> {code}
> <property>
>   <name>dfs.datanode.disk.check.timeout</name>
>   <value>10m</value>
>   <description>
>     Maximum allowed time for a disk check to complete during DataNode
>     startup. If the check does not complete within this time interval
>     then the disk is declared as failed. This setting supports
>     multiple time unit suffixes as described in dfs.heartbeat.interval.
>     If no suffix is specified then milliseconds is assumed.
>   </description>
> </property>
> {code}
> I don't think the value of this config is used only during DataNode startup; 
> I think it's used whenever volumes are checked.
> The description is misleading, so we need to correct it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13768) Adding replicas to volume map makes DataNode start slowly

2018-10-01 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13768:
-
Fix Version/s: 3.1.2
   3.2.0

>  Adding replicas to volume map makes DataNode start slowly 
> ---
>
> Key: HDFS-13768
> URL: https://issues.apache.org/jira/browse/HDFS-13768
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: HDFS-13768.01.patch, HDFS-13768.02.patch, 
> HDFS-13768.03.patch, HDFS-13768.04.patch, HDFS-13768.05.patch, 
> HDFS-13768.06.patch, HDFS-13768.07.patch, HDFS-13768.patch, screenshot-1.png
>
>
> We found the DNs starting very slowly when rolling-upgrading our cluster. When 
> we restart the DNs, they start slowly and do not register to the NN 
> immediately, which causes a lot of the following errors:
> {noformat}
> DataXceiver error processing WRITE_BLOCK operation  src: /xx.xx.xx.xx:64360 
> dst: /xx.xx.xx.xx:50010
> java.io.IOException: Not ready to serve the block pool, 
> BP-1508644862-xx.xx.xx.xx-1493781183457.
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAndWaitForBP(DataXceiver.java:1290)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1298)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:630)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Looking into the DN startup logic, it performs the initial block pool 
> operation before registration. During block pool initialization, we found that 
> adding replicas to the volume map is the most expensive operation. Related log:
> {noformat}
> 2018-07-26 10:46:23,771 INFO [Thread-105] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/1/dfs/dn/current: 242722ms
> 2018-07-26 10:46:26,231 INFO [Thread-109] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/5/dfs/dn/current: 245182ms
> 2018-07-26 10:46:32,146 INFO [Thread-112] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/8/dfs/dn/current: 251097ms
> 2018-07-26 10:47:08,283 INFO [Thread-106] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/2/dfs/dn/current: 287235ms
> {noformat}
> Currently the DN uses an independent thread to scan and add replicas for each 
> volume, but we still have to wait for the slowest thread to finish its work. 
> So the key opportunity here is to make each thread run faster.
> The jstack we captured while the DN was blocked adding replicas:
> {noformat}
> "Thread-113" #419 daemon prio=5 os_prio=0 tid=0x7f40879ff000 nid=0x145da 
> runnable [0x7f4043a38000]
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.list(Native Method)
>   at java.io.File.list(File.java:1122)
>   at java.io.File.listFiles(File.java:1207)
>   at org.apache.hadoop.fs.FileUtil.listFiles(FileUtil.java:1165)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:445)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:342)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:864)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:191)
> {noformat}
> One improvement may be to use a ForkJoinPool for this recursive task rather 
> than doing it synchronously. This could greatly speed up the recovery process.
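> A rough sketch of the ForkJoinPool idea (hypothetical code, not the actual
> patch; addReplicaToMap stands in for the real BlockPoolSlice logic):
> {code:java}
> import java.io.File;
> import java.util.ArrayList;
> import java.util.List;
> import java.util.concurrent.ForkJoinPool;
> import java.util.concurrent.RecursiveAction;
>
> class ScanTask extends RecursiveAction {
>   private final File dir;
>   ScanTask(File dir) { this.dir = dir; }
>
>   @Override
>   protected void compute() {
>     File[] files = dir.listFiles();
>     if (files == null) {
>       return;
>     }
>     List<ScanTask> subTasks = new ArrayList<>();
>     for (File f : files) {
>       if (f.isDirectory()) {
>         subTasks.add(new ScanTask(f)); // descend in parallel
>       } else {
>         addReplicaToMap(f);            // placeholder for the real work
>       }
>     }
>     invokeAll(subTasks);               // fork and join the subdirectory scans
>   }
>
>   private void addReplicaToMap(File blockFile) {
>     // placeholder: the real code parses the block file and updates ReplicaMap
>   }
> }
> // Usage: new ForkJoinPool().invoke(new ScanTask(volumeCurrentDir));
> {code}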



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---

[jira] [Commented] (HDFS-13768) Adding replicas to volume map makes DataNode start slowly

2018-10-01 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634838#comment-16634838
 ] 

Yiqun Lin commented on HDFS-13768:
--

I have committed this to trunk and branch-3.1, but there are some conflicts 
when backporting to branch-2. [~surendrasingh], would you mind attaching a 
patch for branch-2? I think this would be nice to have in the 2.x versions.

>  Adding replicas to volume map makes DataNode start slowly 
> ---
>
> Key: HDFS-13768
> URL: https://issues.apache.org/jira/browse/HDFS-13768
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-13768.01.patch, HDFS-13768.02.patch, 
> HDFS-13768.03.patch, HDFS-13768.04.patch, HDFS-13768.05.patch, 
> HDFS-13768.06.patch, HDFS-13768.07.patch, HDFS-13768.patch, screenshot-1.png
>
>
> We found the DNs starting very slowly when rolling-upgrading our cluster. When 
> we restart the DNs, they start slowly and do not register to the NN 
> immediately, which causes a lot of the following errors:
> {noformat}
> DataXceiver error processing WRITE_BLOCK operation  src: /xx.xx.xx.xx:64360 
> dst: /xx.xx.xx.xx:50010
> java.io.IOException: Not ready to serve the block pool, 
> BP-1508644862-xx.xx.xx.xx-1493781183457.
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAndWaitForBP(DataXceiver.java:1290)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1298)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:630)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Looking into the DN startup logic, it performs the initial block pool 
> operation before registration. During block pool initialization, we found that 
> adding replicas to the volume map is the most expensive operation. Related log:
> {noformat}
> 2018-07-26 10:46:23,771 INFO [Thread-105] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/1/dfs/dn/current: 242722ms
> 2018-07-26 10:46:26,231 INFO [Thread-109] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/5/dfs/dn/current: 245182ms
> 2018-07-26 10:46:32,146 INFO [Thread-112] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/8/dfs/dn/current: 251097ms
> 2018-07-26 10:47:08,283 INFO [Thread-106] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/2/dfs/dn/current: 287235ms
> {noformat}
> Currently the DN uses an independent thread to scan and add replicas for each 
> volume, but we still have to wait for the slowest thread to finish its work. 
> So the key opportunity here is to make each thread run faster.
> The jstack we captured while the DN was blocked adding replicas:
> {noformat}
> "Thread-113" #419 daemon prio=5 os_prio=0 tid=0x7f40879ff000 nid=0x145da 
> runnable [0x7f4043a38000]
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.list(Native Method)
>   at java.io.File.list(File.java:1122)
>   at java.io.File.listFiles(File.java:1207)
>   at org.apache.hadoop.fs.FileUtil.listFiles(FileUtil.java:1165)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:445)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:342)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:864)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:191)
> {noformat}
> One improvement may be to use a ForkJoinPool for this recursive task rather 
> than doing it synchronously. This could greatly speed up the recovery process.

[jira] [Created] (HDFS-13949) Correct the description of dfs.datanode.disk.check.timeout in hdfs-default.xml

2018-10-01 Thread Toshihiro Suzuki (JIRA)
Toshihiro Suzuki created HDFS-13949:
---

 Summary: Correct the description of 
dfs.datanode.disk.check.timeout in hdfs-default.xml
 Key: HDFS-13949
 URL: https://issues.apache.org/jira/browse/HDFS-13949
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Toshihiro Suzuki


The description of dfs.datanode.disk.check.timeout in hdfs-default.xml is as 
follows:
{code}
<property>
  <name>dfs.datanode.disk.check.timeout</name>
  <value>10m</value>
  <description>
    Maximum allowed time for a disk check to complete during DataNode
    startup. If the check does not complete within this time interval
    then the disk is declared as failed. This setting supports
    multiple time unit suffixes as described in dfs.heartbeat.interval.
    If no suffix is specified then milliseconds is assumed.
  </description>
</property>
{code}

I don't think the value of this config is used only during DataNode startup; I 
think it's used whenever volumes are checked.
The description is misleading, so we need to correct it.
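For illustration, one possible corrected wording that drops the startup-only qualifier (my suggestion here, not necessarily the attached patch's exact text):
{code}
<description>
  Maximum allowed time for a disk check to complete. If the check does
  not complete within this time interval then the disk is declared as
  failed. This setting supports multiple time unit suffixes as described
  in dfs.heartbeat.interval. If no suffix is specified then milliseconds
  is assumed.
</description>
{code}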




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13768) Adding replicas to volume map makes DataNode start slowly

2018-10-01 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634835#comment-16634835
 ] 

Yiqun Lin commented on HDFS-13768:
--

The failed UTs are not related. +1. Committing this.

>  Adding replicas to volume map makes DataNode start slowly 
> ---
>
> Key: HDFS-13768
> URL: https://issues.apache.org/jira/browse/HDFS-13768
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-13768.01.patch, HDFS-13768.02.patch, 
> HDFS-13768.03.patch, HDFS-13768.04.patch, HDFS-13768.05.patch, 
> HDFS-13768.06.patch, HDFS-13768.07.patch, HDFS-13768.patch, screenshot-1.png
>
>
> We found the DNs starting very slowly when rolling-upgrading our cluster. When 
> we restart the DNs, they start slowly and do not register to the NN 
> immediately, which causes a lot of the following errors:
> {noformat}
> DataXceiver error processing WRITE_BLOCK operation  src: /xx.xx.xx.xx:64360 
> dst: /xx.xx.xx.xx:50010
> java.io.IOException: Not ready to serve the block pool, 
> BP-1508644862-xx.xx.xx.xx-1493781183457.
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAndWaitForBP(DataXceiver.java:1290)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1298)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:630)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Looking into the DN startup logic, it performs the initial block pool 
> operation before registration. During block pool initialization, we found that 
> adding replicas to the volume map is the most expensive operation. Related log:
> {noformat}
> 2018-07-26 10:46:23,771 INFO [Thread-105] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/1/dfs/dn/current: 242722ms
> 2018-07-26 10:46:26,231 INFO [Thread-109] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/5/dfs/dn/current: 245182ms
> 2018-07-26 10:46:32,146 INFO [Thread-112] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/8/dfs/dn/current: 251097ms
> 2018-07-26 10:47:08,283 INFO [Thread-106] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/2/dfs/dn/current: 287235ms
> {noformat}
> Currently the DN uses an independent thread to scan and add replicas for each 
> volume, but we still have to wait for the slowest thread to finish its work. 
> So the key opportunity here is to make each thread run faster.
> The jstack we captured while the DN was blocked adding replicas:
> {noformat}
> "Thread-113" #419 daemon prio=5 os_prio=0 tid=0x7f40879ff000 nid=0x145da 
> runnable [0x7f4043a38000]
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.list(Native Method)
>   at java.io.File.list(File.java:1122)
>   at java.io.File.listFiles(File.java:1207)
>   at org.apache.hadoop.fs.FileUtil.listFiles(FileUtil.java:1165)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:445)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:342)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:864)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:191)
> {noformat}
> One improvement may be to use a ForkJoinPool for this recursive task rather 
> than doing it synchronously. This could greatly speed up the recovery process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634829#comment-16634829
 ] 

Hadoop QA commented on HDDS-564:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} docker {color} | {color:blue}  0m  
5s{color} | {color:blue} Dockerfile 
'/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/sourcedir/dev-support/docker/Dockerfile'
 not found, falling back to built-in. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  8m 
10s{color} | {color:red} Docker failed to build yetus/hadoop:date2018-10-02. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-564 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942054/HDDS-564-docker-hadoop-runner.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1260/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-566) Move OzoneSecure docker-compose after HDDS-447

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634826#comment-16634826
 ] 

Hadoop QA commented on HDDS-566:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
19s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
11s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green}  0m  
5s{color} | {color:green} There were no new pylint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
34s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-566 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942052/HDDS-566-HDDS-4.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  shellcheck  shelldocs  pylint  |
| uname | Linux afc05ef95503 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-4 / 9363d8f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| shellcheck | v0.4.6 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1259/artifact/out/patch-mvninstall-hadoop-

[jira] [Commented] (HDFS-13943) [JDK10] Fix javadoc errors in hadoop-hdfs-client module

2018-10-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634820#comment-16634820
 ] 

Hudson commented on HDFS-13943:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15089 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15089/])
HDFS-13943. [JDK10] Fix javadoc errors in hadoop-hdfs-client module. (tasanuma: 
rev f6c5ef9903dba5eb268997110ef169125327c2c8)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/ByteArrayManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSPacket.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/DfsClientShmManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/fs/XAttr.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenIdentifier.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ReconfigurationProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInotifyEventInputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/AddBlockFlag.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java


> [JDK10] Fix javadoc errors in hadoop-hdfs-client module
> ---
>
> Key: HDFS-13943
> URL: https://issues.apache.org/jira/browse/HDFS-13943
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13943.01.patch, HDFS-13943.02.patch
>
>
> There are 85 errors in hadoop-hdfs-client module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13877) HttpFS: Implement GETSNAPSHOTDIFF

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634811#comment-16634811
 ] 

Hadoop QA commented on HDFS-13877:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  4s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 
442 unchanged - 1 fixed = 445 total (was 443) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
30s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
2s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13877 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942044/HDFS-13877.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e7cb02c7a0a3 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7d08219 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS

[jira] [Updated] (HDFS-13943) [JDK10] Fix javadoc errors in hadoop-hdfs-client module

2018-10-01 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-13943:

   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

> [JDK10] Fix javadoc errors in hadoop-hdfs-client module
> ---
>
> Key: HDFS-13943
> URL: https://issues.apache.org/jira/browse/HDFS-13943
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13943.01.patch, HDFS-13943.02.patch
>
>
> There are 85 errors in hadoop-hdfs-client module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13943) [JDK10] Fix javadoc errors in hadoop-hdfs-client module

2018-10-01 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634807#comment-16634807
 ] 

Takanobu Asanuma commented on HDFS-13943:
-

Committed to trunk. Thanks for the contribution, [~ajisakaa]!

> [JDK10] Fix javadoc errors in hadoop-hdfs-client module
> ---
>
> Key: HDFS-13943
> URL: https://issues.apache.org/jira/browse/HDFS-13943
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-13943.01.patch, HDFS-13943.02.patch
>
>
> There are 85 errors in hadoop-hdfs-client module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13943) [JDK10] Fix javadoc errors in hadoop-hdfs-client module

2018-10-01 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634805#comment-16634805
 ] 

Takanobu Asanuma commented on HDFS-13943:
-

The checkstyle warnings are mainly caused by not using javadoc format for the 
ASCII art. I don't think they are a problem. +1. Will commit it later.

> [JDK10] Fix javadoc errors in hadoop-hdfs-client module
> ---
>
> Key: HDFS-13943
> URL: https://issues.apache.org/jira/browse/HDFS-13943
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-13943.01.patch, HDFS-13943.02.patch
>
>
> There are 85 errors in hadoop-hdfs-client module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-1915) fuse-dfs does not support append

2018-10-01 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634806#comment-16634806
 ] 

Wei-Chiu Chuang commented on HDFS-1915:
---

[~pranay_singh] thanks for the patch.
However, I think the thing you wanted to fix is unrelated to the OP's issue. 
Also, I don't think the behavior of the DFSClient is wrong: if something 
triggers pipeline recovery, that means a datanode is bad, so that DN should get 
removed. And if the pipeline has just a single DN, it will not be able to 
write.

> fuse-dfs does not support append
> 
>
> Key: HDFS-1915
> URL: https://issues.apache.org/jira/browse/HDFS-1915
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fuse-dfs
>Affects Versions: 0.20.2
> Environment: Ubuntu 10.04 LTS on EC2
>Reporter: Sampath K
>Assignee: Pranay Singh
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-1915.001.patch, HDFS-1915.002.patch, 
> HDFS-1915.003.patch
>
>
> Environment:  CloudEra CDH3, EC2 cluster with 2 data nodes and 1 name 
> node(Using ubuntu 10.04 LTS large instances), mounted hdfs in OS using 
> fuse-dfs. 
> Able to do HDFS fs -put but when I try to use a FTP client(ftp PUT) to do the 
> same, I get the following error. I am using vsFTPd on the server.
> Changed the mounted folder permissions to a+w to rule out any WRITE 
> permission issues. I was able to do a FTP GET on the same mounted 
> volume.
> Please advise
> FTPd Log
> ==
> Tue May 10 23:45:00 2011 [pid 2] CONNECT: Client "127.0.0.1"
> Tue May 10 23:45:09 2011 [pid 1] [ftpuser] OK LOGIN: Client "127.0.0.1"
> Tue May 10 23:48:41 2011 [pid 3] [ftpuser] OK DOWNLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter.txt", 10 bytes, 0.42Kbyte/sec
> Tue May 10 23:49:24 2011 [pid 3] [ftpuser] FAIL UPLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter1.txt", 0.00Kbyte/sec
> Error in Namenode Log (I did a ftp GET on counter.txt and PUT with 
> counter1.txt) 
> ===
> 2011-05-11 01:03:02,822 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser   
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:02,825 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root  
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:20,275 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root  
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:20,290 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser   
> ip=/10.32.77.36 cmd=opensrc=/upload/counter.txt dst=null
> perm=null
> 2011-05-11 01:03:31,115 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.startFile: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
> 2011-05-11 01:03:31,115 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 7 on 9000, call append(/upload/counter1.txt, DFSClient_1590956638) from 
> 10.32.77.36:56454: error: java.io.FileNotFoundException: failed to append to 
> non-existent file /upload/counter1.txt on client 10.32.77.36
> java.io.FileNotFoundException: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1166)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1336)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:596)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1415)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1411)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1409)
> No activity shows up in datanode logs.
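For context, the failing call in the log above is an append on a path that does 
not yet exist; HDFS append requires an existing file. A minimal sketch of the 
create-or-append pattern a writer (such as fuse-dfs servicing an FTP PUT) would 
need — the class and helper names here are hypothetical, not fuse-dfs code:

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendOrCreate {
  // Append fails with FileNotFoundException on a missing file, so the
  // writer has to create the file first and append only afterwards.
  static void write(FileSystem fs, Path path, byte[] data) throws IOException {
    try (FSDataOutputStream out =
        fs.exists(path) ? fs.append(path) : fs.create(path)) {
      out.write(data);
    }
  }

  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    write(fs, new Path("/upload/counter1.txt"), "42\n".getBytes());
  }
}
{code}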



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Closed] (HDDS-214) HDDS/Ozone First Release

2018-10-01 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton closed HDDS-214.
-

Hadoop Ozone 0.2.1-alpha is released.

> HDDS/Ozone First Release
> 
>
> Key: HDDS-214
> URL: https://issues.apache.org/jira/browse/HDDS-214
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Anu Engineer
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: Ozone 0.2.1 release plan.pdf
>
>
> This is an umbrella JIRA that collects all work items, design discussions, 
> etc. for Ozone's release. We will post a design document soon to open the 
> discussion and nail down the details of the release.
> cc: [~xyao] , [~elek], [~arpitagarwal] [~jnp] , [~msingh] [~nandakumar131], 
> [~bharatviswa]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13947) Review of DirectoryScanner Class

2018-10-01 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13947:
---
Status: Patch Available  (was: Open)

> Review of DirectoryScanner Class
> 
>
> Key: HDFS-13947
> URL: https://issues.apache.org/jira/browse/HDFS-13947
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-13947.1.patch, HDFS-13947.2.patch, 
> HDFS-13947.3.patch
>
>
> Review of Directory Scanner.   Replaced a lot of code with Guava MultiMap.  
> Some general house cleaning and improved logging.  For performance, using 
> {{ArrayList}} instead of {{LinkedList}} where possible; since these lists can 
> be quite large, a LinkedList will consume a lot of memory and be slow to 
> sort/iterate over.
> https://stackoverflow.com/questions/322715/when-to-use-linkedlist-over-arraylist-in-java
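To illustrate the pattern described above (not the patch itself), a minimal 
Guava multimap sketch with ArrayList-backed values; the keys and diff strings 
here are made up:

{code:java}
import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.ListMultimap;
import java.util.List;

public class ScanDiffExample {
  public static void main(String[] args) {
    // One key per block pool; each key's values live in an ArrayList,
    // which stays compact and is fast to sort and iterate.
    ListMultimap<String, String> diffs = ArrayListMultimap.create();
    diffs.put("BP-1", "blk_1001 missing on disk");
    diffs.put("BP-1", "blk_1002 length mismatch");

    List<String> bp1 = diffs.get("BP-1"); // live list view for one block pool
    System.out.println(bp1);
  }
}
{code}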



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13947) Review of DirectoryScanner Class

2018-10-01 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13947:
---
Attachment: HDFS-13947.3.patch

> Review of DirectoryScanner Class
> 
>
> Key: HDFS-13947
> URL: https://issues.apache.org/jira/browse/HDFS-13947
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-13947.1.patch, HDFS-13947.2.patch, 
> HDFS-13947.3.patch
>
>
> Review of Directory Scanner.   Replaced a lot of code with Guava MultiMap.  
> Some general house cleaning and improved logging.  For performance, using 
> {{ArrayList}} instead of {{LinkedList}} where possible; since these lists can 
> be quite large, a LinkedList will consume a lot of memory and be slow to 
> sort/iterate over.
> https://stackoverflow.com/questions/322715/when-to-use-linkedlist-over-arraylist-in-java



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13947) Review of DirectoryScanner Class

2018-10-01 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13947:
---
Status: Open  (was: Patch Available)

> Review of DirectoryScanner Class
> 
>
> Key: HDFS-13947
> URL: https://issues.apache.org/jira/browse/HDFS-13947
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-13947.1.patch, HDFS-13947.2.patch
>
>
> Review of Directory Scanner.   Replaced a lot of code with Guava MultiMap.  
> Some general house cleaning and improved logging.  For performance, using 
> {{ArrayList}} instead of {{LinkedList}} where possible; since these lists can 
> be quite large, a LinkedList will consume a lot of memory and be slow to 
> sort/iterate over.
> https://stackoverflow.com/questions/322715/when-to-use-linkedlist-over-arraylist-in-java



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-564 started by Namit Maheshwari.
-
> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-564 stopped by Namit Maheshwari.
-
> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-564:
--
Attachment: (was: HDDS-564..001.patch)

> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-564 started by Namit Maheshwari.
-
> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-564:
--
Attachment: HDDS-564-docker-hadoop-runner.001.patch
Status: Patch Available  (was: In Progress)

> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-564:
--
Status: Open  (was: Patch Available)

> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-558) When creating keys, the creationTime and modificationTime should ideally be the same

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-558:
--
Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

This is actually the correct behavior. 

In KeyManagerImpl.java, the openKey method is called first, and commitKey is 
only called later.

So creationTime and modificationTime can legitimately differ.

> When creating keys, the creationTime and modificationTime should ideally be 
> the same
> 
>
> Key: HDDS-558
> URL: https://issues.apache.org/jira/browse/HDDS-558
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Namit Maheshwari
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-558.001.patch
>
>
> Steps to replicate:
>  # Start ozone
>  # Create Volume and Bucket or use existing ones
>  # Create Key
>  # List Keys for that bucket or just get key info
> We will see that the creationTime and modificationTime have a minor difference.
>  
> {noformat}
> hadoop@fdaf56d9e9d8:~$ ./bin/ozone sh key put /rvol/rbucket/rkey sample.orc
> hadoop@fdaf56d9e9d8:~$ ./bin/ozone sh key list /rvol/rbucket
> [ {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Wed, 26 Sep 2018 20:29:10 GMT",
> "modifiedOn" : "Wed, 26 Sep 2018 20:29:12 GMT",
> "size" : 2262690,
> "keyName" : "rkey"
> } ]{noformat}
> Potential fix area : KeyManagerImpl#commitKey
> {code:java}
> keyInfo = new OmKeyInfo.Builder()
> .setVolumeName(args.getVolumeName())
> .setBucketName(args.getBucketName())
> .setKeyName(args.getKeyName())
> .setOmKeyLocationInfos(Collections.singletonList(
> new OmKeyLocationInfoGroup(0, locations)))
> .setCreationTime(Time.now())
> .setModificationTime(Time.now())
> .setDataSize(size)
> .setReplicationType(type)
> .setReplicationFactor(factor)
> .build();
> {code}
> For setting both these values, we are getting the current time twice, and thus 
> the minor difference.
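For reference, a minimal sketch of the change the description suggests — read 
the clock once so both fields agree. This is illustrative only; as the 
resolution above notes, openKey and commitKey intentionally stamp the two 
fields at different times:

{code:java}
// Illustrative sketch only, mirroring the snippet above: capture the
// timestamp once so both fields agree if they are set in one place.
long now = Time.now();
keyInfo = new OmKeyInfo.Builder()
    .setVolumeName(args.getVolumeName())
    .setBucketName(args.getBucketName())
    .setKeyName(args.getKeyName())
    .setCreationTime(now)
    .setModificationTime(now)
    .build();
{code}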



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13947) Review of DirectoryScanner Class

2018-10-01 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634777#comment-16634777
 ] 

Kitti Nanasi commented on HDFS-13947:
-

Thanks [~belugabehr] for the patch and [~elgoiri] for the review.

I have some comments:
 - I know it is an existing and minor thing, but 
DirectoryScanner#Stats#toString does not have spaces after the ":" sign.
 - In DirectoryScanner#getDiskReport why is a null added to compilersInProgress 
in case of provided storage type if the null will be ignored later on?
 - I think it would be clearer to change the default for 
DFS_DATANODE_DIRECTORYSCAN_THROTTLE_LIMIT_MS_PER_SEC_DEFAULT in another jira, 
so this one can contain only the refactoring work.

> Review of DirectoryScanner Class
> 
>
> Key: HDFS-13947
> URL: https://issues.apache.org/jira/browse/HDFS-13947
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-13947.1.patch, HDFS-13947.2.patch
>
>
> Review of Directory Scanner.   Replaced a lot of code with Guava MultiMap.  
> Some general house cleaning and improved logging.  For performance, using 
> {{ArrayList}} instead of {{LinkedList}} where possible; since these lists can 
> be quite large, a LinkedList will consume a lot of memory and be slow to 
> sort/iterate over.
> https://stackoverflow.com/questions/322715/when-to-use-linkedlist-over-arraylist-in-java



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-566) Move OzoneSecure docker-compose after HDDS-447

2018-10-01 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-566:

Attachment: HDDS-566-HDDS-4.001.patch

> Move OzoneSecure docker-compose after HDDS-447
> --
>
> Key: HDDS-566
> URL: https://issues.apache.org/jira/browse/HDDS-566
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-566-HDDS-4.001.patch
>
>
> After HDDS-447, the docker-compose files have been moved from hadoop-dist to 
> hadoop-ozone/dist. This ticket is opened to move the secure docker-compose 
> added in HDDS-547 to the new location.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-566) Move OzoneSecure docker-compose after HDDS-447

2018-10-01 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-566:

Status: Patch Available  (was: Open)

> Move OzoneSecure docker-compose after HDDS-447
> --
>
> Key: HDDS-566
> URL: https://issues.apache.org/jira/browse/HDDS-566
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-566-HDDS-4.001.patch
>
>
> After HDDS-447, the docker-compose files have been moved from hadoop-dist to 
> hadoop-ozone/dist. This ticket is opened to move the secure docker-compose 
> added in HDDS-547 to the new location.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-566) Move OzoneSecure docker-compose after HDDS-447

2018-10-01 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634752#comment-16634752
 ] 

Xiaoyu Yao commented on HDDS-566:
-

This is a simple rename patch.

> Move OzoneSecure docker-compose after HDDS-447
> --
>
> Key: HDDS-566
> URL: https://issues.apache.org/jira/browse/HDDS-566
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-566-HDDS-4.001.patch
>
>
> After HDDS-447, the docker-compose files have been moved from hadoop-dist to 
> hadoop-ozone/dist. This ticket is opened to move the secure docker-compose 
> added in HDDS-547 to the new location.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-566) Move OzoneSecure docker-compose after HDDS-447

2018-10-01 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-566:
---

 Summary: Move OzoneSecure docker-compose after HDDS-447
 Key: HDDS-566
 URL: https://issues.apache.org/jira/browse/HDDS-566
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


After HDDS-447, the docker-compose files have been moved from hadoop-dist to 
hadoop-ozone/dist. This ticket is opened to move the secure docker-compose 
added in HDDS-547 to the new location.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads

2018-10-01 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634745#comment-16634745
 ] 

Kitti Nanasi commented on HDFS-13926:
-

Thanks for reporting this and for providing the patch, [~xiaochen]!

The patch looks good to me, the new test fails without the patch and passes 
with it and the precommit test failures do not seem related.

However there are some checkstyle warnings which should be fixed.

> ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped 
> reads
> ---
>
> Key: HDFS-13926
> URL: https://issues.apache.org/jira/browse/HDFS-13926
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13926.01.patch, HDFS-13926.prelim.patch
>
>
> During some integration testing, [~nsheth] found out that per-thread read 
> stats for EC are incorrect. This is because the striped reads are done 
> asynchronously on the worker threads.
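A self-contained toy demo of that failure mode (plain Java with hypothetical 
names, not the HDFS code): work submitted to a pool updates the worker's 
thread-local counter, which the calling thread never sees:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadLocalStatsDemo {
  // Per-thread counter, analogous to FileSystem.Statistics' thread-local data.
  private static final ThreadLocal<long[]> BYTES_READ =
      ThreadLocal.withInitial(() -> new long[1]);

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(1);
    // The "striped read" runs on a worker thread and bumps that thread's counter.
    pool.submit(() -> { BYTES_READ.get()[0] += 1024; }).get();
    pool.shutdown();
    // The calling thread's counter never saw those bytes.
    System.out.println(BYTES_READ.get()[0]); // prints 0, not 1024
  }
}
{code}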



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-01 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634742#comment-16634742
 ] 

Íñigo Goiri commented on HDFS-12284:


[~zhengxg3], are you available for this?
Otherwise, I can do the rebase and the JMX update.

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284.000.patch, HDFS-12284.001.patch, 
> HDFS-12284.002.patch, HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13916) Distcp SnapshotDiff not completely implemented for supporting WebHdfs

2018-10-01 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634726#comment-16634726
 ] 

Wei-Chiu Chuang commented on HDFS-13916:


The use of dfs.getSnapshotDiffReport and webHdfs.getSnapshotDiffReport should 
be discouraged. We should use the iterator-based snapshot diff method 
(HDFS-12594) instead, even though there are a few minor bugs in its 
implementation.

Let's file a new jira for that.

> Distcp SnapshotDiff not completely implemented for supporting WebHdfs
> -
>
> Key: HDFS-13916
> URL: https://issues.apache.org/jira/browse/HDFS-13916
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp, webhdfs
>Affects Versions: 3.0.1, 3.1.1
>Reporter: Xun REN
>Assignee: Xun REN
>Priority: Major
>  Labels: easyfix, newbie, patch
> Attachments: HDFS-13916.002.patch, HDFS-13916.003.patch, 
> HDFS-13916.004.patch, HDFS-13916.005.patch, HDFS-13916.patch
>
>
> [~ljain] has worked on the JIRA: 
> https://issues.apache.org/jira/browse/HDFS-13052 to provide the possibility 
> to make DistCP of SnapshotDiff with WebHDFSFileSystem. However, in the patch, 
> there is no modification for the real java class which is used by launching 
> the command "hadoop distcp ..."
>  
> You can check in the latest version here:
> [https://github.com/apache/hadoop/blob/branch-3.1.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java#L96-L100]
> In the method "preSyncCheck" of the class "DistCpSync", we still check if the 
> file system is DFS. 
> So I propose to change the class DistCpSync in order to take into 
> consideration what was committed by Lokesh Jain.
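A hedged sketch of the kind of change proposed — loosening the DFS-only check 
so WebHdfsFileSystem is accepted as well. Simplified and hypothetical, not the 
actual DistCpSync code:

{code:java}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.web.WebHdfsFileSystem;

final class SnapshotDiffSupportCheck {
  // Before: only DistributedFileSystem passed this check.
  static void checkSnapshotDiffSupport(FileSystem fs) {
    boolean supported = fs instanceof DistributedFileSystem
        || fs instanceof WebHdfsFileSystem;
    if (!supported) {
      throw new IllegalArgumentException(
          fs.getUri() + " does not support snapshot-diff-based distcp sync");
    }
  }
}
{code}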



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13052) WebHDFS: Add support for snasphot diff

2018-10-01 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634725#comment-16634725
 ] 

Wei-Chiu Chuang commented on HDFS-13052:


Additionally, while reviewing HDFS-12594 today, I found a few minor bugs in its 
iterator implementation. Will file a new jira soon, too.

> WebHDFS: Add support for snasphot diff
> --
>
> Key: HDFS-13052
> URL: https://issues.apache.org/jira/browse/HDFS-13052
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: snapshot, webhdfs
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-13052.001.patch, HDFS-13052.002.patch, 
> HDFS-13052.003.patch, HDFS-13052.004.patch, HDFS-13052.005.patch, 
> HDFS-13052.006.patch, HDFS-13052.007.patch
>
>
> This Jira aims to implement snapshot diff operation for webHdfs filesystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-562) Create acceptance test to test aws cli with the s3 gateway

2018-10-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634724#comment-16634724
 ] 

Hudson commented on HDDS-562:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15088 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15088/])
HDDS-562. Create acceptance test to test aws cli with the s3 gateway. (bharat: 
rev 7d082193d2c55b89210b277cd9a0dc2f4e590bee)
* (add) hadoop-ozone/dist/src/main/smoketest/s3/awscli.robot
* (edit) hadoop-ozone/dist/src/main/smoketest/test.sh


> Create acceptance test to test aws cli with the s3 gateway
> --
>
> Key: HDDS-562
> URL: https://issues.apache.org/jira/browse/HDDS-562
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.2
>
> Attachments: HDDS-562.001.patch
>
>
> We can test the endpoint in multiple ways:
> 1. With a unit test, mocking the OzoneClient
> 2. With an acceptance test using awscli/s3cmd and other s3-compatible tools
> 3. Using the test suite of the hadoop s3a connector
> In this issue I create a simple template to test with awscli (2.), which could 
> be improved during the implementation of the various endpoints.
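As a rough illustration of approach 2, an acceptance-style check driving the 
gateway with the stock aws cli; the endpoint URL (assuming the default s3g 
port 9878), the credentials, and the bucket name are all assumptions:

{noformat}
# Dummy credentials; the cli insists on having some configured.
export AWS_ACCESS_KEY_ID=dummy AWS_SECRET_ACCESS_KEY=dummy
# Point the stock aws cli at the s3 gateway instead of AWS.
aws s3api --endpoint-url http://localhost:9878 create-bucket --bucket bucket1
aws s3api --endpoint-url http://localhost:9878 list-buckets
{noformat}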



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13052) WebHDFS: Add support for snasphot diff

2018-10-01 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634722#comment-16634722
 ] 

Wei-Chiu Chuang commented on HDFS-13052:


[~renxunsaky] thanks for the tips. I actually didn't realize your patch was 
related as well.
Hmm, after a quick review of HDFS-13916 and the use of getSnapshotDiffReport in 
DistCpSync, it doesn't look like a simple change. I would vote for a new jira 
after HDFS-13916.

> WebHDFS: Add support for snasphot diff
> --
>
> Key: HDFS-13052
> URL: https://issues.apache.org/jira/browse/HDFS-13052
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: snapshot, webhdfs
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-13052.001.patch, HDFS-13052.002.patch, 
> HDFS-13052.003.patch, HDFS-13052.004.patch, HDFS-13052.005.patch, 
> HDFS-13052.006.patch, HDFS-13052.007.patch
>
>
> This Jira aims to implement snapshot diff operation for webHdfs filesystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-8) Add OzoneManager Delegation Token support

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-8?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634716#comment-16634716
 ] 

Hadoop QA commented on HDDS-8:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
57s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
18s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
39s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
35s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m  
9s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
46s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
31s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 23m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 23m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
4m  3s{color} | {color:orange} root: The patch generated 2 new + 8 unchanged - 
11 fixed = 10 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
3s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
38s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
7s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {c

[jira] [Commented] (HDDS-520) Implement HeadBucket REST endpoint

2018-10-01 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634700#comment-16634700
 ] 

Bharat Viswanadham commented on HDDS-520:
-

Thank you, [~elek], for the review and for testing the HEAD bucket REST 
endpoint with the aws cli.

> Implement HeadBucket REST endpoint
> --
>
> Key: HDDS-520
> URL: https://issues.apache.org/jira/browse/HDDS-520
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-520.00.patch
>
>
> This operation is useful to determine if a bucket exists and you have 
> permission to access it. The operation returns a 200 OK if the bucket exists 
> and you have permission to access it. Otherwise, the operation might return 
> responses such as 404 Not Found and 403 Forbidden.  
> See the reference here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketHEAD.html
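A minimal JAX-RS sketch of those semantics (the names here are hypothetical; 
the real endpoint would consult the OzoneClient, and 403 handling is omitted):

{code:java}
import javax.ws.rs.HEAD;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/{bucket}")
public class HeadBucketEndpoint {
  @HEAD
  public Response head(@PathParam("bucket") String bucket) {
    if (!bucketExists(bucket)) {
      return Response.status(Response.Status.NOT_FOUND).build(); // 404
    }
    return Response.ok().build(); // 200; HEAD responses carry no body
  }

  private boolean bucketExists(String name) {
    return true; // placeholder: the real gateway would look the bucket up
  }
}
{code}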



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-562) Create acceptance test to test aws cli with the s3 gateway

2018-10-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-562:

   Resolution: Fixed
Fix Version/s: 0.2.2
   Status: Resolved  (was: Patch Available)

Thank you, [~elek], for the fix.

I have committed this to trunk.

> Create acceptance test to test aws cli with the s3 gateway
> --
>
> Key: HDDS-562
> URL: https://issues.apache.org/jira/browse/HDDS-562
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.2
>
> Attachments: HDDS-562.001.patch
>
>
> We can test the endpoint in multiple ways:
> 1. With a unit test, mocking the OzoneClient
> 2. With an acceptance test using awscli/s3cmd and other s3-compatible tools
> 3. Using the test suite of the hadoop s3a connector
> In this issue I create a simple template to test with awscli (2.), which could 
> be improved during the implementation of the various endpoints.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13877) HttpFS: Implement GETSNAPSHOTDIFF

2018-10-01 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634688#comment-16634688
 ] 

Siyao Meng commented on HDFS-13877:
---

Uploaded patch rev 002 addressing comments by [~ljain].

Regarding checkstyle:
1. Fixed Javadoc warning, removed unused import, fixed line length.
2. Ignoring the JsonUtilClient "Utility classes should not have a public or 
default constructor" warning, since the class needs to be public for HttpFS to 
use methods in it (written previously for WebHDFS) without code refactoring.
3. Ignoring the HttpFSServer "Avoid nested blocks" warning, because the 
previous code is written in the same (nested-block) style.


> HttpFS: Implement GETSNAPSHOTDIFF
> -
>
> Key: HDFS-13877
> URL: https://issues.apache.org/jira/browse/HDFS-13877
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13877.001.patch, HDFS-13877.001.patch, 
> HDFS-13877.002.patch
>
>
> Implement GETSNAPSHOTDIFF (from HDFS-13052) in HttpFS.
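For reference, the shape of the call being implemented, assuming a default 
HttpFS port of 14000 and hypothetical path and snapshot names; the query 
parameters follow the WebHDFS GETSNAPSHOTDIFF op from HDFS-13052:

{noformat}
curl -s "http://httpfs-host:14000/webhdfs/v1/dir?op=GETSNAPSHOTDIFF&oldsnapshotname=s1&snapshotname=s2&user.name=hdfs"
{noformat}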



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13877) HttpFS: Implement GETSNAPSHOTDIFF

2018-10-01 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13877:
--
Attachment: HDFS-13877.002.patch
Status: Patch Available  (was: In Progress)

> HttpFS: Implement GETSNAPSHOTDIFF
> -
>
> Key: HDFS-13877
> URL: https://issues.apache.org/jira/browse/HDFS-13877
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13877.001.patch, HDFS-13877.001.patch, 
> HDFS-13877.002.patch
>
>
> Implement GETSNAPSHOTDIFF (from HDFS-13052) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13877) HttpFS: Implement GETSNAPSHOTDIFF

2018-10-01 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13877:
--
Status: In Progress  (was: Patch Available)

> HttpFS: Implement GETSNAPSHOTDIFF
> -
>
> Key: HDFS-13877
> URL: https://issues.apache.org/jira/browse/HDFS-13877
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13877.001.patch, HDFS-13877.001.patch, 
> HDFS-13877.002.patch
>
>
> Implement GETSNAPSHOTDIFF (from HDFS-13052) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-562) Create acceptance test to test aws cli with the s3 gateway

2018-10-01 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634686#comment-16634686
 ] 

Bharat Viswanadham commented on HDDS-562:
-

Thank you, [~elek], for reporting and providing a fix for this issue.

+1, LGTM.

I will commit this patch shortly. This will let the subtasks under HDDS-434 add 
acceptance tests for their implementations.

> Create acceptance test to test aws cli with the s3 gateway
> --
>
> Key: HDDS-562
> URL: https://issues.apache.org/jira/browse/HDDS-562
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-562.001.patch
>
>
> We can test the endpoint in multiple ways:
> 1. With a unit test, mocking the OzoneClient
> 2. With an acceptance test using awscli/s3cmd and other s3-compatible tools
> 3. Using the test suite of the hadoop s3a connector
> In this issue I create a simple template to test with awscli (2.), which could 
> be improved during the implementation of the various endpoints.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-01 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634685#comment-16634685
 ] 

CR Hota commented on HDFS-12284:


Hi [~elgoiri] [~zhengxg3],

Gentle reminder!

Could you help upload the patch to the HDFS-13532 branch? I will work on the DT 
part on top of this new branch.

Please let me know if you think I can help here.

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284.000.patch, HDFS-12284.001.patch, 
> HDFS-12284.002.patch, HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13944) [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634667#comment-16634667
 ] 

Hadoop QA commented on HDFS-13944:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 1 new + 19 unchanged - 0 fixed = 20 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
20s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13944 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942031/HDFS-13944.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ceb75f50c108 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4eff629 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25177/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25177/testReport/ |
| Max. process+thread count | 1006 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-projec

[jira] [Commented] (HDFS-7717) Erasure Coding: distribute replication to EC conversion work to DataNode

2018-10-01 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634657#comment-16634657
 ] 

Kitti Nanasi commented on HDFS-7717:


Thanks for the discussion, [~Sammi] and [~andrew.wang]. [~Sammi], if you are 
not working on this, I would be happy to take this on.

I have some comments about the proposed solutions.

1. Distcp: I agree that it is not an efficient solution. It also does not take 
care of deleting the old replicated blocks, which is counterproductive for 
anyone who wants to use erasure coding for what it's for: reducing storage 
space.

2. Tool like mover: There is one more thing to consider about this solution: 
once the datanodes have finished converting the files, the tool has to notify 
the namenode to swap each file's metadata to the erasure-coded version, and the 
old replicas have to be removed, which adds more complexity to this solution. 
Failure handling will also be more difficult to implement in this case. 
However, namenode performance would not be affected that much.

3. Inside the namenode: In this case the change will impact namenode 
performance, but the implementation would be simpler, e.g. for failure handling 
and deleting the old replicas. It would also be easier to handle exceptional 
cases, because all the information is already in the namenode. For example, if 
a node was in maintenance state when the conversion happened, then when it 
leaves maintenance it will still hold a replica of a file that is already 
erasure coded.

4. Like SPS: The fourth option would be to implement it like SPS, both as an 
external tool and as a namenode daemon thread, so the user can decide which 
one to use. If I understand correctly, when SPS was designed it was intended 
to be a model for future tools like this EC converter.

I would prefer to implement it the SPS way, because then it can run both 
inside and outside of the namenode; however, it would be more difficult to 
implement than doing it only in the namenode.

Now that we know more about both SPS and the EC use cases, I would like to hear 
what everyone thinks about this.

> Erasure Coding: distribute replication to EC conversion work to DataNode
> 
>
> Key: HDFS-7717
> URL: https://issues.apache.org/jira/browse/HDFS-7717
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Sammi Chen
>Priority: Major
>
> In *stripping* erasure coding case, we need some approach to distribute 
> conversion work between replication and stripping erasure coding to DataNode. 
> It can be NameNode, or a tool utilizing MR just like the current distcp, or 
> another one like the balancer/mover. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-558) When creating keys, the creationTime and modificationTime should ideally be the same

2018-10-01 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634640#comment-16634640
 ] 

Dinesh Chitlangia commented on HDDS-558:


[~nmaheshwari] - Thank you for working on this issue. 

Minor comments:
 # Please add an integration test to check that creationTime and 
modificationTime are the same when creating a key. This will ensure that if 
anyone modifies the implementation in the future, the test failure will force 
them to correct it.
 # Replace the comment

{noformat}
// HDDS-558 creationTime and modificationTime should be same while creating a 
key{noformat}
with
{noformat}
//creationTime and modificationTime should be same while creating a 
key{noformat}
Typically, we try to avoid mentioning Jira ID in the code.

> When creating keys, the creationTime and modificationTime should ideally be 
> the same
> 
>
> Key: HDDS-558
> URL: https://issues.apache.org/jira/browse/HDDS-558
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Namit Maheshwari
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-558.001.patch
>
>
> Steps to replicate:
>  # Start ozone
>  # Create Volume and Bucket or use existing ones
>  # Create Key
>  # List Keys for that bucket or just get key info
> We will see that the creationTime and modificationTime have a minor difference.
>  
> {noformat}
> hadoop@fdaf56d9e9d8:~$ ./bin/ozone sh key put /rvol/rbucket/rkey sample.orc
> hadoop@fdaf56d9e9d8:~$ ./bin/ozone sh key list /rvol/rbucket
> [ {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Wed, 26 Sep 2018 20:29:10 GMT",
> "modifiedOn" : "Wed, 26 Sep 2018 20:29:12 GMT",
> "size" : 2262690,
> "keyName" : "rkey"
> } ]{noformat}
> Potential fix area : KeyManagerImpl#commitKey
> {code:java}
> keyInfo = new OmKeyInfo.Builder()
> .setVolumeName(args.getVolumeName())
> .setBucketName(args.getBucketName())
> .setKeyName(args.getKeyName())
> .setOmKeyLocationInfos(Collections.singletonList(
> new OmKeyLocationInfoGroup(0, locations)))
> .setCreationTime(Time.now())
> .setModificationTime(Time.now())
> .setDataSize(size)
> .setReplicationType(type)
> .setReplicationFactor(factor)
> .build();
> {code}
> For setting both these values, we are getting the current time twice, and 
> thus the minor difference.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-558) When creating keys, the creationTime and modificationTime should ideally be the same

2018-10-01 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634640#comment-16634640
 ] 

Dinesh Chitlangia edited comment on HDDS-558 at 10/1/18 9:12 PM:
-

[~nmaheshwari] - Thank you for working on this issue. Overall it LGTM.

Minor comments:
 # Please add an integration test to check that creationTime and 
modificationTime are the same when creating a key. This will ensure that if 
anyone modifies the implementation in the future, the test failure will force 
them to correct it.
 # Replace the comment

{noformat}
// HDDS-558 creationTime and modificationTime should be same while creating a 
key{noformat}
with
{noformat}
//creationTime and modificationTime should be same while creating a 
key{noformat}
Typically, we try to avoid mentioning Jira IDs in the code.


was (Author: dineshchitlangia):
[~nmaheshwari] - Thank you for working on this issue. 

Minor comments:
 # Please add integration test to check that creationTime and modificationTime 
is same when creating a key. This will ensure that in future, if any one 
modified the implementation, the test failure will force them to correct it.
 # Replace the comment

{noformat}
// HDDS-558 creationTime and modificationTime should be same while creating a 
key{noformat}
with
{noformat}
//creationTime and modificationTime should be same while creating a 
key{noformat}
Typically, we try to avoid mentioning Jira ID in the code.

> When creating keys, the creationTime and modificationTime should ideally be 
> the same
> 
>
> Key: HDDS-558
> URL: https://issues.apache.org/jira/browse/HDDS-558
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Namit Maheshwari
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-558.001.patch
>
>
> Steps to replicate:
>  # Start ozone
>  # Create Volume and Bucket or use existing ones
>  # Create Key
>  # List Keys for that bucket or just get key info
> We will see that the creationTime and modificationTime have a minor difference.
>  
> {noformat}
> hadoop@fdaf56d9e9d8:~$ ./bin/ozone sh key put /rvol/rbucket/rkey sample.orc
> hadoop@fdaf56d9e9d8:~$ ./bin/ozone sh key list /rvol/rbucket
> [ {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Wed, 26 Sep 2018 20:29:10 GMT",
> "modifiedOn" : "Wed, 26 Sep 2018 20:29:12 GMT",
> "size" : 2262690,
> "keyName" : "rkey"
> } ]{noformat}
> Potential fix area : KeyManagerImpl#commitKey
> {code:java}
> keyInfo = new OmKeyInfo.Builder()
> .setVolumeName(args.getVolumeName())
> .setBucketName(args.getBucketName())
> .setKeyName(args.getKeyName())
> .setOmKeyLocationInfos(Collections.singletonList(
> new OmKeyLocationInfoGroup(0, locations)))
> .setCreationTime(Time.now())
> .setModificationTime(Time.now())
> .setDataSize(size)
> .setReplicationType(type)
> .setReplicationFactor(factor)
> .build();
> {code}
> For setting both these values, we are getting the current time twice, and 
> thus the minor difference.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-525) Support virtual-hosted style URLs

2018-10-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634633#comment-16634633
 ] 

Hudson commented on HDDS-525:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15086 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15086/])
HDDS-525. Support virtual-hosted style URLs. Contributed by Bharat (bharat: rev 
4eff629ab3a330e8f1efe92857e76235cb412ef4)
* (add) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/VirtualHostStyleFilter.java
* (edit) hadoop-ozone/integration-test/pom.xml
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/S3GatewayConfigKeys.java
* (add) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/TestVirtualHostStyleFilter.java
* (add) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/package-info.java
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestOzoneConfigurationFields.java


> Support virtual-hosted style URLs
> -
>
> Key: HDDS-525
> URL: https://issues.apache.org/jira/browse/HDDS-525
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.2
>
> Attachments: HDDS-525.00.patch, HDDS-525.02.patch, HDDS-525.03.patch, 
> HDDS-525.04.patch, HDDS-525.05.patch, HDDS-525.06.patch, HDDS-525.07.patch, 
> HDDS-525.08.patch
>
>
> AWS supports two kinds of patterns for the base url of the s3 rest api: 
> virtual-hosted style and path-style.
> Path style: http://s3.us-east-2.amazonaws.com/bucket
> Virtual-hosted style: http://bucket.s3.us-east-2.amazonaws.com
> By default we support the path-style method with the volume name in the url:
> http://s3.us-east-2.amazonaws.com/volume/bucket
> Here the endpoint url is http://s3.us-east-2.amazonaws.com/volume/ and the 
> bucket is appended.
> Some of the 3rd party s3 tools (goofys is an example) support only the 
> virtual-hosted style method. With goofys we can set a custom endpoint 
> (http://localhost:9878) but all the other postfixes after the port are 
> removed.
> It can be solved by using a virtual-hosted style url which could also include 
> the volume name:
> http://bucket.volume..com
> The easiest way to support both of them is to implement a 
> ContainerRequestFilter which can parse the hostname (based on a configuration 
> value) and extend the existing url by adding the missing volume/bucket 
> part. 
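
For illustration, a minimal sketch of such a pre-matching filter (the class 
name, domain constant, and rewrite rules here are assumptions for this sketch, 
not the committed VirtualHostStyleFilter):
{code:java}
import java.io.IOException;
import java.net.URI;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.core.UriBuilder;
import javax.ws.rs.ext.Provider;

@Provider
@PreMatching
public class VirtualHostRewriteSketch implements ContainerRequestFilter {

  // Assumption: in the real gateway the domain comes from configuration.
  private static final String DOMAIN = "s3g.example.com";

  @Override
  public void filter(ContainerRequestContext ctx) throws IOException {
    URI uri = ctx.getUriInfo().getRequestUri();
    String host = uri.getHost();
    if (host == null || !host.endsWith("." + DOMAIN)) {
      return; // already path style, nothing to rewrite
    }
    // Host looks like bucket.volume.<domain>; split off the two labels.
    String prefix = host.substring(0, host.length() - DOMAIN.length() - 1);
    String[] labels = prefix.split("\\.");
    if (labels.length != 2) {
      return; // unexpected host shape, leave the request untouched
    }
    // Rebuild as path style: /volume/bucket plus the original path.
    URI rewritten = UriBuilder.fromUri(uri)
        .host(DOMAIN)
        .replacePath("/" + labels[1] + "/" + labels[0] + uri.getPath())
        .build();
    ctx.setRequestUri(rewritten);
  }
}
{code}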



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-558) When creating keys, the creationTime and modificationTime should ideally be the same

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634623#comment-16634623
 ] 

Hadoop QA commented on HDDS-558:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-ozone/ozone-manager: The patch generated 
1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-558 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942023/HDDS-558.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 168b22befe5f 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cc80ac2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1258/artifact/out/diff-checkstyle-hadoop-ozone_ozone-manager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1258/testReport/ |
| Max. process+thread count | 332 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
| Console output | 
https://builds.apache.or

[jira] [Commented] (HDFS-13947) Review of DirectoryScanner Class

2018-10-01 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634613#comment-16634613
 ] 

BELUGA BEHR commented on HDFS-13947:


[~elgoiri] !

My personal view is that with wide-screen monitors being the norm, 80 
characters is too draconian.  I would leave it as is.

However, the unit test failures are related.  I've got to address that; I know 
what's causing it, I just need to figure out a workaround.  Thanks!

> Review of DirectoryScanner Class
> 
>
> Key: HDFS-13947
> URL: https://issues.apache.org/jira/browse/HDFS-13947
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-13947.1.patch, HDFS-13947.2.patch
>
>
> Review of Directory Scanner.   Replaced a lot of code with Guava MultiMap.  
> Some general house cleaning and improved logging.  For performance, using 
> {{ArrayList}} instead of {{LinkedList}} where possible, especially since 
> these lists can be quite large; a LinkedList will consume a lot of memory and 
> be slow to sort/iterate over.
> https://stackoverflow.com/questions/322715/when-to-use-linkedlist-over-arraylist-in-java
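
As a toy illustration of that claim (a hypothetical micro-benchmark, not part 
of the patch; absolute timings vary by JVM and heap, but the gap and the extra 
node allocations are the point):
{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;
import java.util.Random;

public class ListSortDemo {
  public static void main(String[] args) {
    Random rnd = new Random(42);
    List<Long> arrayList = new ArrayList<>();
    List<Long> linkedList = new LinkedList<>();
    for (int i = 0; i < 1_000_000; i++) {
      long v = rnd.nextLong();
      arrayList.add(v);
      linkedList.add(v);   // every element also allocates a node object
    }
    long t0 = System.nanoTime();
    Collections.sort(arrayList);
    long t1 = System.nanoTime();
    Collections.sort(linkedList); // copies to an array, sorts, writes back
    long t2 = System.nanoTime();
    System.out.printf("ArrayList sort:  %d ms%n", (t1 - t0) / 1_000_000);
    System.out.printf("LinkedList sort: %d ms%n", (t2 - t1) / 1_000_000);
  }
}
{code}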



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13947) Review of DirectoryScanner Class

2018-10-01 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634613#comment-16634613
 ] 

BELUGA BEHR edited comment on HDFS-13947 at 10/1/18 8:38 PM:
-

[~elgoiri] !

My personal view is that with wide-screen monitors being the norm, 80 
characters is too draconian.  I would leave it as is.

However, the unit test failures are related.  I've got to address that; I know 
what's causing it, I just need to figure out a workaround.  Thanks!


was (Author: belugabehr):
[~elgoiri] !

My personal view is that with wide-screen monitors being the normal, 80 
characters is too draconian.  I would leave it as such.

However, unit tests are related.  I've got to address that; I know what's 
causing it, just need to figure out a workaround.  Thanks!

> Review of DirectoryScanner Class
> 
>
> Key: HDFS-13947
> URL: https://issues.apache.org/jira/browse/HDFS-13947
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-13947.1.patch, HDFS-13947.2.patch
>
>
> Review of Directory Scanner.   Replaced a lot of code with Guava MultiMap.  
> Some general house cleaning and improved logging.  For performance, using 
> {{ArrayList}} instead of {{LinkedList}} where possible, especially since 
> these lists can be quite large; a LinkedList will consume a lot of memory and 
> be slow to sort/iterate over.
> https://stackoverflow.com/questions/322715/when-to-use-linkedlist-over-arraylist-in-java



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-525) Support virtual-hosted style URLs

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634589#comment-16634589
 ] 

Hadoop QA commented on HDDS-525:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
9s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} s3gateway in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 55s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.pipeline.TestNodeFailure |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=1

[jira] [Commented] (HDFS-13947) Review of DirectoryScanner Class

2018-10-01 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634580#comment-16634580
 ] 

Íñigo Goiri commented on HDFS-13947:


Thanks [~belugabehr] for the clarification; using -1 as disabled is way more 
intuitive.
Regarding the checkstyle warnings, I'm not sure what to do.
Splitting the lines and appending strings kind of defeats the purpose of the 
logger's {} placeholders.
Any thoughts on whether it is useful to fix the checkstyle warnings?
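
For what it's worth, a hypothetical example of a wrap that keeps the {} 
placeholders without appending: break at the argument list rather than inside 
the format string (all names below are made up):
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogWrapDemo {
  private static final Logger LOG = LoggerFactory.getLogger(LogWrapDemo.class);

  static void report(String volume, long elapsedMs, int checked, int diffs) {
    // Wrapping at the argument boundary stays under 80 columns and keeps
    // the message formatting lazy; no string concatenation is introduced.
    LOG.info("Scanned volume {} in {} ms: {} blocks checked, {} mismatches",
        volume, elapsedMs, checked, diffs);
  }
}
{code}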

> Review of DirectoryScanner Class
> 
>
> Key: HDFS-13947
> URL: https://issues.apache.org/jira/browse/HDFS-13947
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-13947.1.patch, HDFS-13947.2.patch
>
>
> Review of Directory Scanner.   Replaced a lot of code with Guava MultiMap.  
> Some general house cleaning and improved logging.  For performance, using 
> {{ArrayList}} instead of {{LinkedList}} where possible, especially since 
> these lists can be quite large; a LinkedList will consume a lot of memory and 
> be slow to sort/iterate over.
> https://stackoverflow.com/questions/322715/when-to-use-linkedlist-over-arraylist-in-java



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13944) [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module

2018-10-01 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634571#comment-16634571
 ] 

Íñigo Goiri commented on HDFS-13944:


I'm not sure how to handle the checkstyle warning: if I split the line, javadoc 
complains; if I don't, checkstyle complains.

> [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module
> 
>
> Key: HDFS-13944
> URL: https://issues.apache.org/jira/browse/HDFS-13944
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Akira Ajisaka
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13944.000.patch, HDFS-13944.001.patch, 
> HDFS-13944.002.patch, javadoc-rbf-000.log, javadoc-rbf.log
>
>
> There are 34 errors in hadoop-hdfs-rbf module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13944) [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module

2018-10-01 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13944:
---
Attachment: HDFS-13944.002.patch

> [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module
> 
>
> Key: HDFS-13944
> URL: https://issues.apache.org/jira/browse/HDFS-13944
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Akira Ajisaka
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13944.000.patch, HDFS-13944.001.patch, 
> HDFS-13944.002.patch, javadoc-rbf-000.log, javadoc-rbf.log
>
>
> There are 34 errors in hadoop-hdfs-rbf module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-525) Support virtual-hosted style URLs

2018-10-01 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634401#comment-16634401
 ] 

Bharat Viswanadham edited comment on HDDS-525 at 10/1/18 8:19 PM:
--

Thank you [~anu] for the review. I have updated the logs to debug level in patch v08. 


was (Author: bharatviswa):
Thank You [~anu] for review. I will update info logs to debug during commit.

 

> Support virtual-hosted style URLs
> -
>
> Key: HDDS-525
> URL: https://issues.apache.org/jira/browse/HDDS-525
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.2
>
> Attachments: HDDS-525.00.patch, HDDS-525.02.patch, HDDS-525.03.patch, 
> HDDS-525.04.patch, HDDS-525.05.patch, HDDS-525.06.patch, HDDS-525.07.patch, 
> HDDS-525.08.patch
>
>
> AWS supports two kinds of patterns for the base url of the s3 rest api: 
> virtual-hosted style and path-style.
> Path style: http://s3.us-east-2.amazonaws.com/bucket
> Virtual-hosted style: http://bucket.s3.us-east-2.amazonaws.com
> By default we support the path-style method with the volume name in the url:
> http://s3.us-east-2.amazonaws.com/volume/bucket
> Here the endpoint url is http://s3.us-east-2.amazonaws.com/volume/ and the 
> bucket is appended.
> Some of the 3rd party s3 tools (goofys is an example) support only the 
> virtual-hosted style method. With goofys we can set a custom endpoint 
> (http://localhost:9878) but all the other postfixes after the port are 
> removed.
> It can be solved by using a virtual-hosted style url which could also include 
> the volume name:
> http://bucket.volume..com
> The easiest way to support both of them is to implement a 
> ContainerRequestFilter which can parse the hostname (based on a configuration 
> value) and extend the existing url by adding the missing volume/bucket 
> part. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-525) Support virtual-hosted style URLs

2018-10-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-525:

Fix Version/s: 0.2.2

> Support virtual-hosted style URLs
> -
>
> Key: HDDS-525
> URL: https://issues.apache.org/jira/browse/HDDS-525
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.2
>
> Attachments: HDDS-525.00.patch, HDDS-525.02.patch, HDDS-525.03.patch, 
> HDDS-525.04.patch, HDDS-525.05.patch, HDDS-525.06.patch, HDDS-525.07.patch, 
> HDDS-525.08.patch
>
>
> AWS supports two kinds of patterns for the base url of the s3 rest api: 
> virtual-hosted style and path-style.
> Path style: http://s3.us-east-2.amazonaws.com/bucket
> Virtual-hosted style: http://bucket.s3.us-east-2.amazonaws.com
> By default we support the path-style method with the volume name in the url:
> http://s3.us-east-2.amazonaws.com/volume/bucket
> Here the endpoint url is http://s3.us-east-2.amazonaws.com/volume/ and the 
> bucket is appended.
> Some of the 3rd party s3 tools (goofys is an example) support only the 
> virtual-hosted style method. With goofys we can set a custom endpoint 
> (http://localhost:9878) but all the other postfixes after the port are 
> removed.
> It can be solved by using a virtual-hosted style url which could also include 
> the volume name:
> http://bucket.volume..com
> The easiest way to support both of them is to implement a 
> ContainerRequestFilter which can parse the hostname (based on a configuration 
> value) and extend the existing url by adding the missing volume/bucket 
> part. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-525) Support virtual-hosted style URLs

2018-10-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-525:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you, [~elek] and [~anu], for the review.

I have committed this to trunk.

> Support virtual-hosted style URLs
> -
>
> Key: HDDS-525
> URL: https://issues.apache.org/jira/browse/HDDS-525
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-525.00.patch, HDDS-525.02.patch, HDDS-525.03.patch, 
> HDDS-525.04.patch, HDDS-525.05.patch, HDDS-525.06.patch, HDDS-525.07.patch, 
> HDDS-525.08.patch
>
>
> AWS supports two kinds of patterns for the base url of the s3 rest api: 
> virtual-hosted style and path-style.
> Path style: http://s3.us-east-2.amazonaws.com/bucket
> Virtual-hosted style: http://bucket.s3.us-east-2.amazonaws.com
> By default we support the path-style method with the volume name in the url:
> http://s3.us-east-2.amazonaws.com/volume/bucket
> Here the endpoint url is http://s3.us-east-2.amazonaws.com/volume/ and the 
> bucket is appended.
> Some of the 3rd party s3 tools (goofys is an example) support only the 
> virtual-hosted style method. With goofys we can set a custom endpoint 
> (http://localhost:9878) but all the other postfixes after the port are 
> removed.
> It can be solved by using a virtual-hosted style url which could also include 
> the volume name:
> http://bucket.volume..com
> The easiest way to support both of them is to implement a 
> ContainerRequestFilter which can parse the hostname (based on a configuration 
> value) and extend the existing url by adding the missing volume/bucket 
> part. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-560) Create Generic exception class to be used by S3 rest services

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634565#comment-16634565
 ] 

Hadoop QA commented on HDDS-560:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} s3gateway in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-560 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942016/HDDS-560.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 78905b15e6bd 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cc80ac2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1255/testReport/ |
| Max. process+thread count | 440 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/s3gateway U: hadoop-ozone/s3gateway |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1255/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Create Generic exception class to be used b

[jira] [Commented] (HDFS-13052) WebHDFS: Add support for snasphot diff

2018-10-01 Thread Xun REN (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634559#comment-16634559
 ] 

Xun REN commented on HDFS-13052:


Hi [~jojochuang],

Thanks for this comment. So if I understand correctly, you mean to cancel the 
modification in HDFS-13916 and replace it with something like 
DistributedFileSystem#snapshotDiffReportListingRemoteIterator?

Should we create a new Jira or keep using HDFS-13916?

 

 

> WebHDFS: Add support for snasphot diff
> --
>
> Key: HDFS-13052
> URL: https://issues.apache.org/jira/browse/HDFS-13052
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: snapshot, webhdfs
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-13052.001.patch, HDFS-13052.002.patch, 
> HDFS-13052.003.patch, HDFS-13052.004.patch, HDFS-13052.005.patch, 
> HDFS-13052.006.patch, HDFS-13052.007.patch
>
>
> This Jira aims to implement the snapshot diff operation for the WebHDFS filesystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-565) TestContainerPersistence fails regularly in Jenkins

2018-10-01 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-565:
--
Target Version/s: 0.2.2

> TestContainerPersistence fails regularly in Jenkins
> ---
>
> Key: HDDS-565
> URL: https://issues.apache.org/jira/browse/HDDS-565
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Hanisha Koneru
>Priority: Minor
>  Labels: newbie
>
> TestContainerPersistence tests are regularly failing in Jenkins with the 
> error - "{{Unable to create directory /dfs/data}}". 
> In {{#init()}}, we are setting HDDS_DATANODE_DIR_KEY to a test dir location. 
> But in {{#setupPaths}}, we are using DFS_DATANODE_DATA_DIR_KEY as the data 
> dir location. 
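
A minimal sketch of the fix direction (assuming the usual locations of these 
constants; the helper name is hypothetical): point both keys at the same test 
directory so nothing resolves to /dfs/data.
{code:java}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.scm.ScmConfigKeys;
import org.apache.hadoop.hdfs.DFSConfigKeys;

public class TestDirConfSketch {
  /** Align both data-dir keys on one temp dir under the build tree. */
  static OzoneConfiguration newTestConf(String testDir) {
    OzoneConfiguration conf = new OzoneConfiguration();
    conf.set(ScmConfigKeys.HDDS_DATANODE_DIR_KEY, testDir);
    conf.set(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY, testDir);
    return conf;
  }
}
{code}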



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-565) TestContainerPersistence fails regularly in Jenkins

2018-10-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634555#comment-16634555
 ] 

Anu Engineer commented on HDDS-565:
---

Nice find. Thanks for root-causing this.

> TestContainerPersistence fails regularly in Jenkins
> ---
>
> Key: HDDS-565
> URL: https://issues.apache.org/jira/browse/HDDS-565
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Hanisha Koneru
>Priority: Minor
>  Labels: newbie
>
> TestContainerPersistence tests are regularly failing in Jenkins with the 
> error - "{{Unable to create directory /dfs/data}}". 
> In {{#init()}}, we are setting HDDS_DATANODE_DIR_KEY to a test dir location. 
> But in {{#setupPaths}}, we are using DFS_DATANODE_DATA_DIR_KEY as the data 
> dir location. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-525) Support virtual-hosted style URLs

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634541#comment-16634541
 ] 

Hadoop QA commented on HDDS-525:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} s3gateway in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 28s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.TestContainerReplication |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.hdds.scm.pipeline.TestNodeFailure |
|   | hadoop.ozone.

[jira] [Updated] (HDDS-558) When creating keys, the creationTime and modificationTime should ideally be the same

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-558:
--
Attachment: HDDS-558.001.patch
Status: Patch Available  (was: In Progress)

> When creating keys, the creationTime and modificationTime should ideally be 
> the same
> 
>
> Key: HDDS-558
> URL: https://issues.apache.org/jira/browse/HDDS-558
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Namit Maheshwari
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-558.001.patch
>
>
> Steps to replicate:
>  # Start ozone
>  # Create Volume and Bucket or use existing ones
>  # Create Key
>  # List Keys for that bucket or just get key info
> We will see that the creationTime and modificationTime have a minor difference.
>  
> {noformat}
> hadoop@fdaf56d9e9d8:~$ ./bin/ozone sh key put /rvol/rbucket/rkey sample.orc
> hadoop@fdaf56d9e9d8:~$ ./bin/ozone sh key list /rvol/rbucket
> [ {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Wed, 26 Sep 2018 20:29:10 GMT",
> "modifiedOn" : "Wed, 26 Sep 2018 20:29:12 GMT",
> "size" : 2262690,
> "keyName" : "rkey"
> } ]{noformat}
> Potential fix area : KeyManagerImpl#commitKey
> {code:java}
> keyInfo = new OmKeyInfo.Builder()
> .setVolumeName(args.getVolumeName())
> .setBucketName(args.getBucketName())
> .setKeyName(args.getKeyName())
> .setOmKeyLocationInfos(Collections.singletonList(
> new OmKeyLocationInfoGroup(0, locations)))
> .setCreationTime(Time.now())
> .setModificationTime(Time.now())
> .setDataSize(size)
> .setReplicationType(type)
> .setReplicationFactor(factor)
> .build();
> {code}
> For setting both these values, we are getting the current time twice, and 
> thus the minor difference.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634531#comment-16634531
 ] 

Hadoop QA commented on HDDS-564:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDDS-564 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-564 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942021/HDDS-564..001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1257/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564..001.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13877) HttpFS: Implement GETSNAPSHOTDIFF

2018-10-01 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634521#comment-16634521
 ] 

Siyao Meng commented on HDFS-13877:
---

[~ljain] Thanks for the comments! Fixing right away.

[~jojochuang] This sounds like a new jira to me. Will file in a moment.

> HttpFS: Implement GETSNAPSHOTDIFF
> -
>
> Key: HDFS-13877
> URL: https://issues.apache.org/jira/browse/HDFS-13877
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13877.001.patch, HDFS-13877.001.patch
>
>
> Implement GETSNAPSHOTDIFF (from HDFS-13052) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-558) When creating keys, the creationTime and modificationTime should ideally be the same

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-558 started by Namit Maheshwari.
-
> When creating keys, the creationTime and modificationTime should ideally be 
> the same
> 
>
> Key: HDDS-558
> URL: https://issues.apache.org/jira/browse/HDDS-558
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Namit Maheshwari
>Priority: Major
>  Labels: newbie
>
> Steps to replicate:
>  # Start ozone
>  # Create Volume and Bucket or use existing ones
>  # Create Key
>  # List Keys for that bucket or just get key info
> We will see that the creationTime and modificationTime have a minor difference.
>  
> {noformat}
> hadoop@fdaf56d9e9d8:~$ ./bin/ozone sh key put /rvol/rbucket/rkey sample.orc
> hadoop@fdaf56d9e9d8:~$ ./bin/ozone sh key list /rvol/rbucket
> [ {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Wed, 26 Sep 2018 20:29:10 GMT",
> "modifiedOn" : "Wed, 26 Sep 2018 20:29:12 GMT",
> "size" : 2262690,
> "keyName" : "rkey"
> } ]{noformat}
> Potential fix area : KeyManagerImpl#commitKey
> {code:java}
> keyInfo = new OmKeyInfo.Builder()
> .setVolumeName(args.getVolumeName())
> .setBucketName(args.getBucketName())
> .setKeyName(args.getKeyName())
> .setOmKeyLocationInfos(Collections.singletonList(
> new OmKeyLocationInfoGroup(0, locations)))
> .setCreationTime(Time.now())
> .setModificationTime(Time.now())
> .setDataSize(size)
> .setReplicationType(type)
> .setReplicationFactor(factor)
> .build();
> {code}
> For setting both these values, we are getting the current time twice, and 
> thus the minor difference.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-564:
--
Attachment: HDDS-564..001.patch
Status: Patch Available  (was: In Progress)

> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564..001.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-564 started by Namit Maheshwari.
-
> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-8) Add OzoneManager Delegation Token support

2018-10-01 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-8?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634515#comment-16634515
 ] 

Ajay Kumar commented on HDDS-8:
---

patch v11 to fix the license warning and one checkstyle issue.

> Add OzoneManager Delegation Token support
> -
>
> Key: HDDS-8
> URL: https://issues.apache.org/jira/browse/HDDS-8
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-8-HDDS-4.00.patch, HDDS-8-HDDS-4.01.patch, 
> HDDS-8-HDDS-4.02.patch, HDDS-8-HDDS-4.03.patch, HDDS-8-HDDS-4.04.patch, 
> HDDS-8-HDDS-4.05.patch, HDDS-8-HDDS-4.06.patch, HDDS-8-HDDS-4.07.patch, 
> HDDS-8-HDDS-4.08.patch, HDDS-8-HDDS-4.09.patch, HDDS-8-HDDS-4.10.patch, 
> HDDS-8-HDDS-4.11.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-565) TestContainerPersistence fails regularly in Jenkins

2018-10-01 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-565:
---

 Summary: TestContainerPersistence fails regularly in Jenkins
 Key: HDDS-565
 URL: https://issues.apache.org/jira/browse/HDDS-565
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Hanisha Koneru


TestContainerPersistence tests are regularly failing in Jenkins with the error 
- "{{Unable to create directory /dfs/data}}". 
In {{#init()}}, we are setting HDDS_DATANODE_DIR_KEY to a test dir location. 
But in {{#setupPaths}}, we are using DFS_DATANODE_DATA_DIR_KEY as the data dir 
location. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-8) Add OzoneManager Delegation Token support

2018-10-01 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-8?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-8:
--
Attachment: HDDS-8-HDDS-4.11.patch

> Add OzoneManager Delegation Token support
> -
>
> Key: HDDS-8
> URL: https://issues.apache.org/jira/browse/HDDS-8
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-8-HDDS-4.00.patch, HDDS-8-HDDS-4.01.patch, 
> HDDS-8-HDDS-4.02.patch, HDDS-8-HDDS-4.03.patch, HDDS-8-HDDS-4.04.patch, 
> HDDS-8-HDDS-4.05.patch, HDDS-8-HDDS-4.06.patch, HDDS-8-HDDS-4.07.patch, 
> HDDS-8-HDDS-4.08.patch, HDDS-8-HDDS-4.09.patch, HDDS-8-HDDS-4.10.patch, 
> HDDS-8-HDDS-4.11.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari reassigned HDDS-564:
-

Assignee: Namit Maheshwari

> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-6255) fuse_dfs will not adhere to ACL permissions in some cases

2018-10-01 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh reassigned HDFS-6255:
--

Assignee: (was: Pranay Singh)

> fuse_dfs will not adhere to ACL permissions in some cases
> -
>
> Key: HDFS-6255
> URL: https://issues.apache.org/jira/browse/HDFS-6255
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fuse-dfs
>Affects Versions: 2.4.0, 3.0.0-alpha1
>Reporter: Stephen Chu
>Priority: Major
>
> As the hdfs user, I created a directory /tmp/acl_dir/ and set its permissions to 
> 700. Then I set a new ACL, group:jenkins:rwx, on /tmp/acl_dir.
> {code}
> jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -getfacl /tmp/acl_dir
> # file: /tmp/acl_dir
> # owner: hdfs
> # group: supergroup
> user::rwx
> group::---
> group:jenkins:rwx
> mask::rwx
> other::---
> {code}
> Through the FsShell, the jenkins user can list /tmp/acl_dir as well as create 
> a file and directory inside.
> {code}
> [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -touchz /tmp/acl_dir/testfile1
> [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -mkdir /tmp/acl_dir/testdir1
> [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -ls /tmp/acl_dir/
> Found 2 items
> drwxr-xr-x   - jenkins supergroup  0 2014-04-17 19:11 
> /tmp/acl_dir/testdir1
> -rw-r--r--   1 jenkins supergroup  0 2014-04-17 19:11 
> /tmp/acl_dir/testfile1
> [jenkins@hdfs-vanilla-1 ~]$ 
> {code}
> However, as the same jenkins user, when I try to cd into /tmp/acl_dir using a 
> fuse_dfs mount, I get permission denied. Same permission denied when I try to 
> create or list files.
> {code}
> [jenkins@hdfs-vanilla-1 tmp]$ ls -l
> total 16
> drwxrwx--- 4 hdfs    nobody 4096 Apr 17 19:11 acl_dir
> drwx------ 2 hdfs    nobody 4096 Apr 17 18:30 acl_dir_2
> drwxr-xr-x 3 mapred  nobody 4096 Mar 11 03:53 mapred
> drwxr-xr-x 4 jenkins nobody 4096 Apr 17 07:25 testcli
> -rwx------ 1 hdfs    nobody    0 Apr  7 17:18 tf1
> [jenkins@hdfs-vanilla-1 tmp]$ cd acl_dir
> bash: cd: acl_dir: Permission denied
> [jenkins@hdfs-vanilla-1 tmp]$ touch acl_dir/testfile2
> touch: cannot touch `acl_dir/testfile2': Permission denied
> [jenkins@hdfs-vanilla-1 tmp]$ mkdir acl_dir/testdir2
> mkdir: cannot create directory `acl_dir/testdir2': Permission denied
> [jenkins@hdfs-vanilla-1 tmp]$ 
> {code}
> The fuse_dfs debug output doesn't show any error for the above operations:
> {code}
> unique: 18, opcode: OPENDIR (27), nodeid: 2, insize: 48
>unique: 18, success, outsize: 32
> unique: 19, opcode: READDIR (28), nodeid: 2, insize: 80
> readdir[0] from 0
>unique: 19, success, outsize: 312
> unique: 20, opcode: GETATTR (3), nodeid: 2, insize: 56
> getattr /tmp
>unique: 20, success, outsize: 120
> unique: 21, opcode: READDIR (28), nodeid: 2, insize: 80
>unique: 21, success, outsize: 16
> unique: 22, opcode: RELEASEDIR (29), nodeid: 2, insize: 64
>unique: 22, success, outsize: 16
> unique: 23, opcode: GETATTR (3), nodeid: 2, insize: 56
> getattr /tmp
>unique: 23, success, outsize: 120
> unique: 24, opcode: GETATTR (3), nodeid: 3, insize: 56
> getattr /tmp/acl_dir
>unique: 24, success, outsize: 120
> unique: 25, opcode: GETATTR (3), nodeid: 3, insize: 56
> getattr /tmp/acl_dir
>unique: 25, success, outsize: 120
> unique: 26, opcode: GETATTR (3), nodeid: 3, insize: 56
> getattr /tmp/acl_dir
>unique: 26, success, outsize: 120
> unique: 27, opcode: GETATTR (3), nodeid: 3, insize: 56
> getattr /tmp/acl_dir
>unique: 27, success, outsize: 120
> unique: 28, opcode: GETATTR (3), nodeid: 3, insize: 56
> getattr /tmp/acl_dir
>unique: 28, success, outsize: 120
> {code}
> In other scenarios, ACL permissions are enforced successfully. For example, 
> as the hdfs user I create /tmp/acl_dir_2 and set its permissions to 777. I then 
> set the ACL user:jenkins:--- on the directory. On the fuse mount, I am not able 
> to ls, mkdir, or touch in that directory as the jenkins user.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-564:
-

 Summary: Update docker-hadoop-runner branch to reflect changes 
done in HDDS-490
 Key: HDDS-564
 URL: https://issues.apache.org/jira/browse/HDDS-564
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Namit Maheshwari


starter.sh needs to be modified to reflect the changes done in HDDS-490

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-561) Move Node2ContainerMap and Node2PipelineMap to NodeManager

2018-10-01 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634502#comment-16634502
 ] 

Hanisha Koneru commented on HDDS-561:
-

Thanks [~ljain].

Patch v03 LGTM. The test failures look unrelated. Can you please fix the 
checkstyle issues?

> Move Node2ContainerMap and Node2PipelineMap to NodeManager
> --
>
> Key: HDDS-561
> URL: https://issues.apache.org/jira/browse/HDDS-561
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-561.001.patch, HDDS-561.002.patch, 
> HDDS-561.003.patch
>
>
> As {{NodeManager}}/{{SCMNodeManager}} handles the lifecycle and maintains the 
> current state of a node/datanode in SCM, {{Node2ContainerMap}} and 
> {{Node2PipelineMap}} should be maintained by {{NodeManager}}. Other components 
> in SCM should query {{NodeManager}} to get this info. It will make the code 
> cleaner and easier to maintain.
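
A minimal sketch of what querying {{NodeManager}} for this state could look 
like (class and method names here are assumptions, not the committed interface):

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: the node-to-container and node-to-pipeline state
// lives behind NodeManager, and other SCM components query it instead of
// maintaining their own maps. All names here are assumptions.
class NodeManagerSketch {
  private final Map<UUID, Set<String>> node2Containers = new ConcurrentHashMap<>();
  private final Map<UUID, Set<String>> node2Pipelines = new ConcurrentHashMap<>();

  Set<String> getContainers(UUID datanode) {
    return node2Containers.getOrDefault(datanode, Collections.emptySet());
  }

  Set<String> getPipelines(UUID datanode) {
    return node2Pipelines.getOrDefault(datanode, Collections.emptySet());
  }
}
{code}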



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-560) Create Generic exception class to be used by S3 rest services

2018-10-01 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634484#comment-16634484
 ] 

Bharat Viswanadham edited comment on HDDS-560 at 10/1/18 7:03 PM:
--

[~anu], [~elek] Thanks for the review.

Uploaded a patch that throws IOException and OS3Exception instead of a plain 
Exception.

For the unhandled exceptions in the else case, I am not sure which error codes 
should be set there, so that is not done in this patch. We should handle them 
as we progress: when a service exception is raised, we set it as the cause and 
rethrow it as an IOException. We need to look into how to handle these 
scenarios.

This is an initial patch that provides the base framework classes for Ozone S3 
exception handling; it does not yet handle everything.

Attached patch v01.

 


was (Author: bharatviswa):
[~anu], [~elek] Thanks for the review.

Uploaded a patch that throws IOException and OS3Exception instead of a plain 
Exception.

For the unhandled exceptions in the else case, I am not sure which error codes 
should be set there, so that is not done in this patch. We should handle them 
as we progress: when a service exception is raised, we set it as the cause and 
rethrow it as an IOException. We need to look into how to handle these 
scenarios.

Attached patch v01.

 

> Create Generic exception class to be used by S3 rest services
> -
>
> Key: HDDS-560
> URL: https://issues.apache.org/jira/browse/HDDS-560
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-560.00.patch, HDDS-560.01.patch
>
>
> Exception class should have the following fields, and structure should be as 
> below:
>  
> {code:java}
> <?xml version="1.0" encoding="UTF-8"?>
> <Error>
>   <Code>NoSuchKey</Code>
>   <Message>The resource you requested does not exist</Message>
>   <Resource>/mybucket/myfoto.jpg</Resource>
>   <RequestId>4442587FB7D0A2F9</RequestId>
> </Error>
> {code}
>  
>  
> https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList
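
A minimal sketch of such a generic exception class (class, field, and method 
names here are assumptions, not the committed API):

{code:java}
// Sketch only: carries the four fields of the S3-style error response
// and renders them in the XML layout quoted above. Names are assumptions.
public class OS3Exception extends Exception {
  private final String code;
  private final String errorMessage;
  private final String resource;
  private final String requestId;

  public OS3Exception(String code, String errorMessage,
                      String resource, String requestId) {
    super(errorMessage);
    this.code = code;
    this.errorMessage = errorMessage;
    this.resource = resource;
    this.requestId = requestId;
  }

  /** Serializes the error in the S3 XML error-response layout. */
  public String toXml() {
    return "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
        + "<Error>\n"
        + "  <Code>" + code + "</Code>\n"
        + "  <Message>" + errorMessage + "</Message>\n"
        + "  <Resource>" + resource + "</Resource>\n"
        + "  <RequestId>" + requestId + "</RequestId>\n"
        + "</Error>";
  }
}
{code}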



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13941) make storageId in BlockPoolTokenSecretManager.checkAccess optional

2018-10-01 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634487#comment-16634487
 ] 

Ajay Kumar commented on HDFS-13941:
---

The test failures look unrelated.

> make storageId in BlockPoolTokenSecretManager.checkAccess optional
> --
>
> Key: HDFS-13941
> URL: https://issues.apache.org/jira/browse/HDFS-13941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13941.00.patch, HDFS-13941.01.patch
>
>
> The change in {{BlockPoolTokenSecretManager.checkAccess}} by 
> [HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
> compatibility for Hadoop 2 clients. Since SecretManager is marked for public 
> audience, we should add an overloaded function to allow Hadoop 2 clients to 
> work with it.
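
A self-contained illustration of the proposed shim (types are simplified here; 
the real manager works with {{Token<BlockTokenIdentifier>}}, {{ExtendedBlock}} 
and {{AccessMode}}):

{code:java}
// Simplified sketch of the backward-compatible overload: the legacy
// Hadoop 2 signature forwards to the new one with no storageId, and a
// null storageId disables the per-storage check. Types are stand-ins.
class BlockPoolTokenSecretManagerSketch {
  /** Legacy entry point kept for Hadoop 2 clients. */
  void checkAccess(String tokenId, String userId, String blockId,
      String accessMode) {
    checkAccess(tokenId, userId, blockId, accessMode, null);
  }

  /** Newer entry point that also validates the storage id. */
  void checkAccess(String tokenId, String userId, String blockId,
      String accessMode, String storageId) {
    // ... validate token, user, block and access mode ...
    if (storageId != null) {
      // ... additionally verify the storage id carried in the token ...
    }
  }
}
{code}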



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-560) Create Generic exception class to be used by S3 rest services

2018-10-01 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634484#comment-16634484
 ] 

Bharat Viswanadham edited comment on HDDS-560 at 10/1/18 7:01 PM:
--

[~anu], [~elek] Thanks for the review.

Uploaded a patch that throws IOException and OS3Exception instead of a plain 
Exception.

For the unhandled exceptions in the else case, I am not sure which error codes 
should be set there, so that is not done in this patch. We should handle them 
as we progress: when a service exception is raised, we set it as the cause and 
rethrow it as an IOException. We need to look into how to handle these 
scenarios.

Attached patch v01.

 


was (Author: bharatviswa):
Uploaded a patch that throws IOException and OS3Exception instead of a plain 
Exception.

For the unhandled exceptions in the else case, I am not sure which error codes 
should be set there, so that is not done in this patch. We should handle them 
as we progress: when a service exception is raised, we set it as the cause and 
rethrow it as an IOException. We need to look into how to handle these 
scenarios.

Attached patch v01.

 

> Create Generic exception class to be used by S3 rest services
> -
>
> Key: HDDS-560
> URL: https://issues.apache.org/jira/browse/HDDS-560
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-560.00.patch, HDDS-560.01.patch
>
>
> Exception class should have the following fields, and structure should be as 
> below:
>  
> {code:java}
> <?xml version="1.0" encoding="UTF-8"?>
> <Error>
>   <Code>NoSuchKey</Code>
>   <Message>The resource you requested does not exist</Message>
>   <Resource>/mybucket/myfoto.jpg</Resource>
>   <RequestId>4442587FB7D0A2F9</RequestId>
> </Error>
> {code}
>  
>  
> https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-560) Create Generic exception class to be used by S3 rest services

2018-10-01 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634484#comment-16634484
 ] 

Bharat Viswanadham commented on HDDS-560:
-

Uploaded a patch that throws IOException and OS3Exception instead of a plain 
Exception.

For the unhandled exceptions in the else case, I am not sure which error codes 
should be set there, so that is not done in this patch. We should handle them 
as we progress: when a service exception is raised, we set it as the cause and 
rethrow it as an IOException. We need to look into how to handle these 
scenarios.

Attached patch v01.

 

> Create Generic exception class to be used by S3 rest services
> -
>
> Key: HDDS-560
> URL: https://issues.apache.org/jira/browse/HDDS-560
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-560.00.patch, HDDS-560.01.patch
>
>
> Exception class should have the following fields, and structure should be as 
> below:
>  
> {code:java}
> <?xml version="1.0" encoding="UTF-8"?>
> <Error>
>   <Code>NoSuchKey</Code>
>   <Message>The resource you requested does not exist</Message>
>   <Resource>/mybucket/myfoto.jpg</Resource>
>   <RequestId>4442587FB7D0A2F9</RequestId>
> </Error>
> {code}
>  
>  
> https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-560) Create Generic exception class to be used by S3 rest services

2018-10-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-560:

Attachment: HDDS-560.01.patch

> Create Generic exception class to be used by S3 rest services
> -
>
> Key: HDDS-560
> URL: https://issues.apache.org/jira/browse/HDDS-560
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-560.00.patch, HDDS-560.01.patch
>
>
> Exception class should have the following fields, and structure should be as 
> below:
>  
> {code:java}
> <?xml version="1.0" encoding="UTF-8"?>
> <Error>
>   <Code>NoSuchKey</Code>
>   <Message>The resource you requested does not exist</Message>
>   <Resource>/mybucket/myfoto.jpg</Resource>
>   <RequestId>4442587FB7D0A2F9</RequestId>
> </Error>
> {code}
>  
>  
> https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-557) DeadNodeHandler should handle exception from removeContainerHandler api

2018-10-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634462#comment-16634462
 ] 

Hudson commented on HDDS-557:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15084 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15084/])
HDDS-557. DeadNodeHandler should handle exception from (ajay: rev 
cc80ac23156e1e91c1f77df65fa53504fbb34141)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestDeadNodeHandler.java


> DeadNodeHandler should handle exception from removeContainerHandler api
> ---
>
> Key: HDDS-557
> URL: https://issues.apache.org/jira/browse/HDDS-557
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-557.00.patch
>
>
> DeadNodeHandler should handle exception from removeContainerHandler api



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13948) Provide Regex Based Mount Point In Inode Tree

2018-10-01 Thread zhenzhao wang (JIRA)
zhenzhao wang created HDFS-13948:


 Summary: Provide Regex Based Mount Point In Inode Tree
 Key: HDFS-13948
 URL: https://issues.apache.org/jira/browse/HDFS-13948
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: fs
Reporter: zhenzhao wang
Assignee: zhenzhao wang


This jira is created to support regex-based mount points in the Inode Tree. We 
noticed that mount points only support fixed target paths. However, we might have 
use cases where the target needs to refer to some fields from the source. E.g., we 
might want a mapping of /cluster1/user1 => /cluster1-dc1/user-nn-user1, where we 
refer to the `cluster` and `user` fields in the source to construct the target. 
It's impossible to achieve this with the current link type. Though we could set up 
one-to-one mappings, the mount table would become bloated if we have thousands of 
users. Besides, a regex mapping gives us more flexibility. So we are going to build 
a regex-based mount point whose target can refer to groups from the source regex 
mapping. 
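
A minimal, self-contained sketch of the intended resolution (the actual config 
format and class names are not settled here):

{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch only: a source pattern with named groups plus a target template
// that references those groups. The ${...} template syntax is an assumption.
public class RegexMountPointSketch {
  public static void main(String[] args) {
    Pattern src = Pattern.compile("^/(?<cluster>\\w+)/(?<user>\\w+)$");
    String targetTemplate = "/${cluster}-dc1/user-nn-${user}";

    Matcher m = src.matcher("/cluster1/user1");
    if (m.matches()) {
      String resolved = targetTemplate
          .replace("${cluster}", m.group("cluster"))
          .replace("${user}", m.group("user"));
      System.out.println(resolved); // /cluster1-dc1/user-nn-user1
    }
  }
}
{code}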



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-557) DeadNodeHandler should handle exception from removeContainerHandler api

2018-10-01 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-557:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~anu] thanks for the review. The failed test is unrelated; it passes locally. 
Committed the patch to trunk.

> DeadNodeHandler should handle exception from removeContainerHandler api
> ---
>
> Key: HDDS-557
> URL: https://issues.apache.org/jira/browse/HDDS-557
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-557.00.patch
>
>
> DeadNodeHandler should handle exception from removeContainerHandler api



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-521) Implement DeleteBucket REST endpoint

2018-10-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-521:
---

Assignee: Bharat Viswanadham

> Implement DeleteBucket REST endpoint
> 
>
> Key: HDDS-521
> URL: https://issues.apache.org/jira/browse/HDDS-521
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> The delete bucket call will do the opposite of the create bucket call. It will 
> locate the volume via the username in the delete call.
> Reference is here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETE.html
> This is implemented as part of HDDS-444, but we need to double-check the 
> headers and add acceptance tests.
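
A hedged sketch of the endpoint shape (class and path names are assumptions, 
not the HDDS-444 code); per the S3 reference, a successful delete returns 
204 No Content:

{code:java}
import javax.ws.rs.DELETE;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

// Sketch only: DELETE on the bucket path removes the bucket and returns
// 204 No Content on success. All names here are assumptions.
@Path("/{volume}/{bucket}")
public class BucketDeleteSketch {
  @DELETE
  public Response delete(@PathParam("volume") String volume,
                         @PathParam("bucket") String bucket) {
    // ... locate the volume via the user name and delete the bucket ...
    return Response.status(Response.Status.NO_CONTENT).build();
  }
}
{code}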



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-547) Fix secure docker and configs

2018-10-01 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-547:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~xyao] thanks for the contribution. [~elek] thanks for the comments. I have 
committed it to the HDDS-4 branch. 

> Fix secure docker and configs
> -
>
> Key: HDDS-547
> URL: https://issues.apache.org/jira/browse/HDDS-547
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-547-HDDS-4.001.patch, HDDS-547-HDDS-4.002.patch, 
> HDDS-547-HDDS-4.003.patch, HDDS-547-HDDS-4.004.patch, 
> HDDS-547-HDDS-4.005.patch, HDDS-547-HDDS-4.006.patch, 
> HDDS-547-HDDS-4.007.patch
>
>
> This is to provide a workable secure docker after recent trunk rebase for 
> dev/test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13944) [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634407#comment-16634407
 ] 

Hadoop QA commented on HDFS-13944:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 2 new + 19 unchanged - 0 fixed = 21 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m  
6s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13944 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12941993/HDFS-13944.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e1c38424da6a 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f7ff8c0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25176/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25176/testReport/ |
| Max. process+thread count | 979 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |

[jira] [Updated] (HDDS-525) Support virtual-hosted style URLs

2018-10-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-525:

Attachment: HDDS-525.08.patch

> Support virtual-hosted style URLs
> -
>
> Key: HDDS-525
> URL: https://issues.apache.org/jira/browse/HDDS-525
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-525.00.patch, HDDS-525.02.patch, HDDS-525.03.patch, 
> HDDS-525.04.patch, HDDS-525.05.patch, HDDS-525.06.patch, HDDS-525.07.patch, 
> HDDS-525.08.patch
>
>
> AWS supports two kinds of patterns for the base URL of the S3 REST API: 
> virtual-hosted style and path-style.
> Path style: http://s3.us-east-2.amazonaws.com/bucket
> Virtual-hosted style: http://bucket.s3.us-east-2.amazonaws.com
> By default we support the path-style method with the volume name in the URL:
> http://s3.us-east-2.amazonaws.com/volume/bucket
> Here the endpoint URL is http://s3.us-east-2.amazonaws.com/volume/ and the 
> bucket is appended.
> Some third-party S3 tools (goofys is an example) support only the 
> virtual-hosted style method. With goofys we can set a custom endpoint 
> (http://localhost:9878), but all the other postfixes after the port are 
> removed.
> It can be solved by using a virtual-hosted style URL which could also include 
> the volume name:
> http://bucket.volume..com
> The easiest way to support both of them is to implement a 
> ContainerRequestFilter which can parse the hostname (based on a configuration 
> value) and extend the existing URL by adding the missing volume/bucket 
> part. 
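
A hedged sketch of such a filter (the configured domain, the default volume, 
and the exact rewrite rules are assumptions):

{code:java}
import java.io.IOException;
import java.net.URI;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.core.UriBuilder;

// Sketch only: when the Host header ends with the configured domain, the
// leading labels are treated as bucket[.volume] and moved into the path
// before JAX-RS matching. Domain and default volume are assumptions.
@PreMatching
public class VirtualHostStyleFilterSketch implements ContainerRequestFilter {
  private static final String DOMAIN = "s3.example.com";

  @Override
  public void filter(ContainerRequestContext ctx) throws IOException {
    String host = ctx.getHeaderString("Host");
    if (host == null || !host.endsWith("." + DOMAIN)) {
      return; // already a path-style request, nothing to rewrite
    }
    // e.g. "bucket.volume.s3.example.com" -> prefix "bucket.volume"
    String prefix = host.substring(0, host.length() - DOMAIN.length() - 1);
    String[] parts = prefix.split("\\.");
    String bucket = parts[0];
    String volume = parts.length > 1 ? parts[1] : "s3v"; // assumed default
    URI base = ctx.getUriInfo().getBaseUri();
    URI rewritten = UriBuilder.fromUri(base)
        .path(volume).path(bucket)
        .path(ctx.getUriInfo().getPath())
        .build();
    ctx.setRequestUri(base, rewritten);
  }
}
{code}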



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-560) Create Generic exception class to be used by S3 rest services

2018-10-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634402#comment-16634402
 ] 

Anu Engineer commented on HDDS-560:
---

[~bharatviswa], [~elek] I think we have an issue with how we handle and 
propagate exceptions, especially from the server to the Ozone client. This is 
not an S3-specific issue. Let us file a Jira to track how we handle exceptions 
from Ozone Manager, and I will take a look at that later. Sorry for randomizing 
this Jira :(

> Create Generic exception class to be used by S3 rest services
> -
>
> Key: HDDS-560
> URL: https://issues.apache.org/jira/browse/HDDS-560
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-560.00.patch
>
>
> Exception class should have the following fields, and structure should be as 
> below:
>  
> {code:java}
> <?xml version="1.0" encoding="UTF-8"?>
> <Error>
>   <Code>NoSuchKey</Code>
>   <Message>The resource you requested does not exist</Message>
>   <Resource>/mybucket/myfoto.jpg</Resource>
>   <RequestId>4442587FB7D0A2F9</RequestId>
> </Error>
> {code}
>  
>  
> https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-525) Support virtual-hosted style URLs

2018-10-01 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16634401#comment-16634401
 ] 

Bharat Viswanadham commented on HDDS-525:
-

Thank you, [~anu], for the review. I will change the info logs to debug level 
during commit.

 

> Support virtual-hosted style URLs
> -
>
> Key: HDDS-525
> URL: https://issues.apache.org/jira/browse/HDDS-525
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-525.00.patch, HDDS-525.02.patch, HDDS-525.03.patch, 
> HDDS-525.04.patch, HDDS-525.05.patch, HDDS-525.06.patch, HDDS-525.07.patch
>
>
> AWS supports two kinds of patterns for the base URL of the S3 REST API: 
> virtual-hosted style and path-style.
> Path style: http://s3.us-east-2.amazonaws.com/bucket
> Virtual-hosted style: http://bucket.s3.us-east-2.amazonaws.com
> By default we support the path-style method with the volume name in the URL:
> http://s3.us-east-2.amazonaws.com/volume/bucket
> Here the endpoint URL is http://s3.us-east-2.amazonaws.com/volume/ and the 
> bucket is appended.
> Some third-party S3 tools (goofys is an example) support only the 
> virtual-hosted style method. With goofys we can set a custom endpoint 
> (http://localhost:9878), but all the other postfixes after the port are 
> removed.
> It can be solved by using a virtual-hosted style URL which could also include 
> the volume name:
> http://bucket.volume..com
> The easiest way to support both of them is to implement a 
> ContainerRequestFilter which can parse the hostname (based on a configuration 
> value) and extend the existing URL by adding the missing volume/bucket 
> part. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


