[jira] [Commented] (HDFS-13961) TestObserverNode refactoring

2018-10-04 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639336#comment-16639336
 ] 

Konstantin Shvachko commented on HDFS-13961:


Patch 002
- The javac warnings are all about deprecations, which are unrelated to the patch.
- Fixed checkstyle warnings, mostly JavaDoc.
- Tests TestBlockReaderLocal and TestLeaseRecovery2 are flaky on trunk as well.

> TestObserverNode refactoring
> 
>
> Key: HDFS-13961
> URL: https://issues.apache.org/jira/browse/HDFS-13961
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-13961-HDFS-12943.001.patch, 
> HDFS-13961-HDFS-12943.002.patch
>
>
> TestObserverNode combines unit tests for ObserverNode. The tests are of 
> different types. I propose to split them into separate modules, factor out 
> common methods, and optimize the suite so that it starts and shuts down 
> MiniDFSCluster once for the entire test class rather than for each individual 
> test case.
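
For illustration, a minimal JUnit sketch of the per-class lifecycle this refactoring aims for (the class name and configuration are placeholders, not the actual patch):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public abstract class ObserverNodeTestBase {
  private static MiniDFSCluster cluster;

  // Start the cluster once for the whole class instead of once per test case.
  @BeforeClass
  public static void startCluster() throws Exception {
    cluster = new MiniDFSCluster.Builder(new Configuration()).build();
    cluster.waitActive();
  }

  // Shut it down once after all test cases in the class have run.
  @AfterClass
  public static void stopCluster() {
    if (cluster != null) {
      cluster.shutdown();
    }
  }
}
{code}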



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-103) SCM CA: Add new security protocol for SCM to expose security related functions

2018-10-04 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-103:

Summary: SCM CA: Add new security protocol for SCM to expose security 
related functions  (was: SCM CA: StorageContainerDatanodeProtocol for CSR and 
Certificate)

> SCM CA: Add new security protocol for SCM to expose security related functions
> --
>
> Key: HDDS-103
> URL: https://issues.apache.org/jira/browse/HDDS-103
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-578) om-audit-log4j2.properties must be packaged in ozone-dist

2018-10-04 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-578:
---
Description: 
After HDDS-447, it appears the om-audit-log4j2.properties file is not available 
in the ozone tarball.

On decompressing the ozone tar, this file should be present in the 
ozone-/etc/hadoop directory.

This Jira aims to fix this so that audit logging configurations are available 
and logs are generated.

  was:
After HDDS-447, it appears the om-audit-log4j2.properties file is not available 
in etc/hadoop.

This Jira aims to fix this so that audit logging configurations are available 
and logs are generated.


> om-audit-log4j2.properties must be packaged in ozone-dist 
> --
>
> Key: HDDS-578
> URL: https://issues.apache.org/jira/browse/HDDS-578
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: regression
> Attachments: HDDS-578.001.patch
>
>
> After HDDS-447, it appears the om-audit-log4j2.properties file is not 
> available in the ozone tarball.
> On decompressing the ozone tar, this file should be present in the 
> ozone-/etc/hadoop directory.
> This Jira aims to fix this so that audit logging configurations are available 
> and logs are generated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-578) om-audit-log4j2.properties must be packaged in ozone-dist

2018-10-04 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-578:
---
Labels: regression  (was: regresion)

> om-audit-log4j2.properties must be packaged in ozone-dist 
> --
>
> Key: HDDS-578
> URL: https://issues.apache.org/jira/browse/HDDS-578
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: regression
> Attachments: HDDS-578.001.patch
>
>
> After HDDS-447, it appears the om-audit-log4j2.properties file is not 
> available in etc/hadoop.
> This Jira aims to fix this so that audit logging configurations are available 
> and logs are generated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-578) om-audit-log4j2.properties must be packaged in ozone-dist

2018-10-04 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-578:
---
Attachment: HDDS-578.001.patch
Status: Patch Available  (was: Open)

[~elek] / [~anu] - I found this while testing another jira. I have verified the 
patch manually. Please review. Thanks!

> om-audit-log4j2.properties must be packaged in ozone-dist 
> --
>
> Key: HDDS-578
> URL: https://issues.apache.org/jira/browse/HDDS-578
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-578.001.patch
>
>
> After HDDS-447, it appears the om-audit-log4j2.properties file is not 
> available in etc/hadoop.
> This Jira aims to fix this so that audit logging configurations are available 
> and logs are generated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-578) om-audit-log4j2.properties must be packaged in ozone-dist

2018-10-04 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-578:
---
Labels: regresion  (was: )

> om-audit-log4j2.properties must be packaged in ozone-dist 
> --
>
> Key: HDDS-578
> URL: https://issues.apache.org/jira/browse/HDDS-578
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: regresion
> Attachments: HDDS-578.001.patch
>
>
> After HDDS-447, it appears the om-audit-log4j2.properties file is not 
> available in etc/hadoop.
> This Jira aims to fix this so that audit logging configurations are available 
> and logs are generated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-578) om-audit-log4j2.properties must be packaged in ozone-dist

2018-10-04 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-578:
--

 Summary: om-audit-log4j2.properties must be packaged in ozone-dist 
 Key: HDDS-578
 URL: https://issues.apache.org/jira/browse/HDDS-578
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


After HDDS-447, it appears the om-audit-log4j2.properties file is not available 
in etc/hadoop.

This Jira aims to fix this so that audit logging configurations are available 
and logs are generated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13961) TestObserverNode refactoring

2018-10-04 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-13961:
---
Attachment: HDFS-13961-HDFS-12943.002.patch

> TestObserverNode refactoring
> 
>
> Key: HDFS-13961
> URL: https://issues.apache.org/jira/browse/HDFS-13961
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-13961-HDFS-12943.001.patch, 
> HDFS-13961-HDFS-12943.002.patch
>
>
> TestObserverNode combines unit tests for ObserverNode. The tests are of 
> different types. I propose to split them into separate modules, factor out 
> common methods, and optimize the suite so that it starts and shuts down 
> MiniDFSCluster once for the entire test class rather than for each individual 
> test case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-568) Ozone sh unable to delete volume

2018-10-04 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDDS-568:
--

Assignee: Dinesh Chitlangia

> Ozone sh unable to delete volume
> 
>
> Key: HDDS-568
> URL: https://issues.apache.org/jira/browse/HDDS-568
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: Dinesh Chitlangia
>Priority: Blocker
>
> Ozone filesystem volume isn't getting deleted even though the underlying 
> bucket is deleted and is currently empty.
> The ozone sh command throws an error: VOLUME_NOT_FOUND even though it's there.
> On trying to create it again, it says: error:VOLUME_ALREADY_EXISTS (as expected).
> {code:java}
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh bucket list fstestvol
> [ ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume delete fstestvol
> Delete Volume failed, error:VOLUME_NOT_FOUND
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume list
> [ {
>   "owner" : {
> "name" : "root"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 1048576
>   },
>   "volumeName" : "fstestvol",
>   "createdOn" : "Fri, 21 Sep 2018 11:19:23 GMT",
>   "createdBy" : "root"
> } ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume create fstestvol 
> -u=hdfs
> 2018-10-03 10:14:49,151 [main] INFO - Creating Volume: fstestvol, with hdfs 
> as owner and quota set to 1152921504606846976 bytes.
> Volume creation failed, error:VOLUME_ALREADY_EXISTS
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-575) SCMContainerManager#loadExistingContainers should not pass this during initialization

2018-10-04 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-575:
---
Status: Patch Available  (was: Open)

> SCMContainerManager#loadExistingContainers should not pass this during 
> initialization
> -
>
> Key: HDDS-575
> URL: https://issues.apache.org/jira/browse/HDDS-575
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Attachments: HDDS-575.001.patch
>
>
> SCMContainerManager passes the this pointer during the initialization of 
> ContainerStateManager.
> This jira proposes to remove this usage.
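
For context, a generic sketch of why letting this escape from a constructor is risky (illustrative only, not the actual SCM code):

{code:java}
class SCMContainerManager {
  private final ContainerStateManager stateManager;
  private final long sizeLimit;

  SCMContainerManager() {
    // 'this' escapes before the constructor finishes: sizeLimit below is
    // not yet assigned when ContainerStateManager receives the reference,
    // so it can observe a partially constructed object.
    stateManager = new ContainerStateManager(this);
    sizeLimit = 5L * 1024 * 1024 * 1024;
  }
}

class ContainerStateManager {
  ContainerStateManager(SCMContainerManager manager) {
    // Any call back into 'manager' here runs against incomplete state.
  }
}
{code}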



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-479) Add more ozone fs tests in the robot integration framework

2018-10-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639323#comment-16639323
 ] 

Hudson commented on HDDS-479:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15122 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15122/])
HDDS-479. Add more ozone fs tests in the robot integration framework. 
(aengineer: rev 153941b2365a3f4a2fc1285f93eeaf12419aca3a)
* (edit) hadoop-ozone/dist/src/main/smoketest/ozonefs/ozonefs.robot


> Add more ozone fs tests in the robot integration framework
> --
>
> Key: HDDS-479
> URL: https://issues.apache.org/jira/browse/HDDS-479
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Minor
>  Labels: alpha2
> Fix For: 0.3.0
>
> Attachments: HDDS-479.001.patch
>
>
> Currently, we have only a few ozone fs tests in the robot integration 
> framework.
> We need to add more.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13950) ACL documentation update to indicate that ACL entries are capped by 32

2018-10-04 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639318#comment-16639318
 ] 

Adam Antal commented on HDFS-13950:
---

Thanks [~jojochuang]!

> ACL documentation update to indicate that ACL entries are capped by 32
> --
>
> Key: HDFS-13950
> URL: https://issues.apache.org/jira/browse/HDFS-13950
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-13950.001.patch
>
>
> The hadoop documentation does not contain the information that the ACL 
> entries of a file or directory are capped at 32. My proposal is to add a single 
> line to the md file informing the users about this.
> Remark: this is indeed the maximum, as (from AclTransformation.java)
> {code:java}
> private static final int MAX_ENTRIES = 32;{code}
> is set as such.
>  
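
For reference, a hedged sketch of the kind of validation behind this cap (the method body paraphrases AclTransformation and is not quoted verbatim):

{code:java}
private static final int MAX_ENTRIES = 32;

// Paraphrased sketch: reject any ACL whose entry count exceeds the cap.
private static void checkMaxEntries(List<AclEntry> aclEntries)
    throws AclException {
  if (aclEntries.size() > MAX_ENTRIES) {
    throw new AclException("Invalid ACL: ACL has " + aclEntries.size()
        + " entries, which exceeds maximum of " + MAX_ENTRIES + ".");
  }
}
{code}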



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-568) Ozone sh unable to delete volume

2018-10-04 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639295#comment-16639295
 ] 

Dinesh Chitlangia edited comment on HDDS-568 at 10/5/18 6:07 AM:
-

[~arpitagarwal] From my initial tests, it appears that if the volume does not 
begin with a /, the info and delete commands fail to find that volume, 
while create succeeds.

Looking at the audit logs, it appears that when the volume does not begin with /, 
the first letter of the volume name is getting truncated.
{code:java}
01:59:03.138 [IPC Server handler 4 on 9874] ERROR OMAudit - user=dchitlangia | 
ip=127.0.0.1 | op=DELETE_VOLUME {volume=stestvol} | ret=FAILURE
org.apache.hadoop.ozone.om.exceptions.OMException: null
at 
org.apache.hadoop.ozone.om.VolumeManagerImpl.getVolumeInfo(VolumeManagerImpl.java:304)
 ~[hadoop-ozone-ozone-manager-0.3.0-SNAPSHOT.jar:?]

{code}
I will investigate this further.


was (Author: dineshchitlangia):
[~arpitagarwal] From my initial tests, it appears that if the volume does not 
begin with a / then the info and delete commands fail to find that volume, 
while create succeeds.

I will investigate this further.
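
A hypothetical illustration of the truncation symptom described in the edited comment (the real parsing code may differ):

{code:java}
// If address parsing unconditionally strips a leading '/', a volume name
// given without one loses its first character. (Hypothetical sketch.)
String raw = "fstestvol";
String broken = raw.substring(1);                           // "stestvol"
String safe = raw.startsWith("/") ? raw.substring(1) : raw; // "fstestvol"
{code}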

> Ozone sh unable to delete volume
> 
>
> Key: HDDS-568
> URL: https://issues.apache.org/jira/browse/HDDS-568
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Priority: Blocker
>
> Ozone filesystem volume isn't getting deleted even though the underlying 
> bucket is deleted and is currently empty.
> The ozone sh command throws an error: VOLUME_NOT_FOUND even though it's there.
> On trying to create it again, it says: error:VOLUME_ALREADY_EXISTS (as expected).
> {code:java}
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh bucket list fstestvol
> [ ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume delete fstestvol
> Delete Volume failed, error:VOLUME_NOT_FOUND
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume list
> [ {
>   "owner" : {
> "name" : "root"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 1048576
>   },
>   "volumeName" : "fstestvol",
>   "createdOn" : "Fri, 21 Sep 2018 11:19:23 GMT",
>   "createdBy" : "root"
> } ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume create fstestvol 
> -u=hdfs
> 2018-10-03 10:14:49,151 [main] INFO - Creating Volume: fstestvol, with hdfs 
> as owner and quota set to 1152921504606846976 bytes.
> Volume creation failed, error:VOLUME_ALREADY_EXISTS
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-354) VolumeInfo.getScmUsed throws NPE

2018-10-04 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-354:

Fix Version/s: 0.3.0

> VolumeInfo.getScmUsed throws NPE
> 
>
> Key: HDDS-354
> URL: https://issues.apache.org/jira/browse/HDDS-354
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-354.001.patch
>
>
> {code}java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:366)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:264)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
>   at java.util.concurrent.FutureTask.run(FutureTask.java)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745){code}
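
A hedged sketch of a defensive guard at the failing frame (the field name and handling are assumptions, not the committed fix):

{code:java}
public long getScmUsed() throws IOException {
  // 'usage' can be null if the volume failed to initialize; failing with a
  // descriptive IOException is clearer than the NPE above. (Sketch only.)
  if (usage == null) {
    throw new IOException("Volume usage information is not available.");
  }
  return usage.getUsed();
}
{code}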



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-479) Add more ozone fs tests in the robot integration framework

2018-10-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-479:
--
   Resolution: Fixed
Fix Version/s: 0.3.0
   Status: Resolved  (was: Patch Available)

[~msingh], [~shashikant] Thanks for the comments. [~nilotpalnandi] Thanks for 
the contribution. I have committed this patch to the trunk.

 

> Add more ozone fs tests in the robot integration framework
> --
>
> Key: HDDS-479
> URL: https://issues.apache.org/jira/browse/HDDS-479
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Minor
>  Labels: alpha2
> Fix For: 0.3.0
>
> Attachments: HDDS-479.001.patch
>
>
> Currently, we have only a few ozone fs tests in the robot integration 
> framework.
> We need to add more.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-354) VolumeInfo.getScmUsed throws NPE

2018-10-04 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-354:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you [~hanishakoneru] for the contribution and [~nandakumar131] for the 
review and for root-causing the issue.

I have committed this to trunk.

> VolumeInfo.getScmUsed throws NPE
> 
>
> Key: HDDS-354
> URL: https://issues.apache.org/jira/browse/HDDS-354
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-354.001.patch
>
>
> {code}java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:366)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:264)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
>   at java.util.concurrent.FutureTask.run(FutureTask.java)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-568) Ozone sh unable to delete volume

2018-10-04 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639295#comment-16639295
 ] 

Dinesh Chitlangia edited comment on HDDS-568 at 10/5/18 5:49 AM:
-

[~arpitagarwal] From my initial tests, it appears that if the volume does not 
begin with a / then the info and delete commands fail to find that volume, 
while create succeeds.

I will investigate this further.


was (Author: dineshchitlangia):
[~arpitagarwal] From my initial tests, it appears that if the volume does not 
begin with a / then the info and delete commands fail to find that volume.

I will investigate this further.

> Ozone sh unable to delete volume
> 
>
> Key: HDDS-568
> URL: https://issues.apache.org/jira/browse/HDDS-568
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Priority: Blocker
>
> Ozone filesystem volume isn't getting deleted even though the underlying 
> bucket is deleted and is currently empty.
> The ozone sh command throws an error: VOLUME_NOT_FOUND even though it's there.
> On trying to create it again, it says: error:VOLUME_ALREADY_EXISTS (as expected).
> {code:java}
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh bucket list fstestvol
> [ ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume delete fstestvol
> Delete Volume failed, error:VOLUME_NOT_FOUND
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume list
> [ {
>   "owner" : {
> "name" : "root"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 1048576
>   },
>   "volumeName" : "fstestvol",
>   "createdOn" : "Fri, 21 Sep 2018 11:19:23 GMT",
>   "createdBy" : "root"
> } ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume create fstestvol 
> -u=hdfs
> 2018-10-03 10:14:49,151 [main] INFO - Creating Volume: fstestvol, with hdfs 
> as owner and quota set to 1152921504606846976 bytes.
> Volume creation failed, error:VOLUME_ALREADY_EXISTS
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-571) Update SCM chill mode exit criteria to optionally wait for n datanodes

2018-10-04 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639289#comment-16639289
 ] 

Anu Engineer commented on HDDS-571:
---

+1, the patch looks amazing since there is a rule engine inside the Chill Mode 
Manager. You might want to hold off on committing since [~arpitagarwal] might 
want to take a look. Please give him a day, then feel free to commit. Thanks for 
getting this addressed so quickly.

cc: [~elek], [~dchitlangia]

> Update SCM chill mode exit criteria to optionally wait for n datanodes
> --
>
> Key: HDDS-571
> URL: https://issues.apache.org/jira/browse/HDDS-571
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-571.00.patch
>
>
> As suggested by [~arpitagarwal], [~anu] in [HDDS-512], this jira is to update 
> SCM chill mode exit criteria to optionally wait for n datanodes. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-568) Ozone sh unable to delete volume

2018-10-04 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639295#comment-16639295
 ] 

Dinesh Chitlangia commented on HDDS-568:


[~arpitagarwal] From my initial tests, it appears that if the volume does not 
begin with a / then the info and delete commands fail to find that volume.

I will investigate this further.

> Ozone sh unable to delete volume
> 
>
> Key: HDDS-568
> URL: https://issues.apache.org/jira/browse/HDDS-568
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Priority: Blocker
>
> Ozone filesystem volume isn't getting deleted even though the underlying 
> bucket is deleted and is currently empty.
> The ozone sh command throws an error: VOLUME_NOT_FOUND even though it's there.
> On trying to create it again, it says: error:VOLUME_ALREADY_EXISTS (as expected).
> {code:java}
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh bucket list fstestvol
> [ ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume delete fstestvol
> Delete Volume failed, error:VOLUME_NOT_FOUND
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume list
> [ {
>   "owner" : {
> "name" : "root"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 1048576
>   },
>   "volumeName" : "fstestvol",
>   "createdOn" : "Fri, 21 Sep 2018 11:19:23 GMT",
>   "createdBy" : "root"
> } ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume create fstestvol 
> -u=hdfs
> 2018-10-03 10:14:49,151 [main] INFO - Creating Volume: fstestvol, with hdfs 
> as owner and quota set to 1152921504606846976 bytes.
> Volume creation failed, error:VOLUME_ALREADY_EXISTS
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-572) Support S3 buckets as first class objects in Ozone Manager - 1

2018-10-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639267#comment-16639267
 ] 

Hudson commented on HDDS-572:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15121 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15121/])
HDDS-572. Support S3 buckets as first class objects in Ozone Manager - 
(aengineer: rev e6b77ad65f923858fb67f5c2165fefe52d6f8c17)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManagerLock.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerLock.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/S3BucketManager.java
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestS3BucketManager.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/S3BucketManagerImpl.java


> Support S3 buckets as first class objects in Ozone Manager - 1
> --
>
> Key: HDDS-572
> URL: https://issues.apache.org/jira/browse/HDDS-572
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-572.001.patch, HDDS-572.002.patch
>
>
> This Jira proposes to add support for S3 buckets as first class objects in 
> Ozone Manager. Currently we take the Ozone volume via the endpoint URL in the 
> AWS SDK. With this (and the next 2 patches), we can move away from using the 
> ozone volume in the URL.
> cc: [~elek], [~bharatviswa]
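
Judging from the file list in the Jenkins comment above, the new API lives in S3BucketManager; a hypothetical usage sketch (the method names are assumptions, not the committed API):

{code:java}
// Hypothetical sketch: an S3 bucket name is mapped to an ozone
// volume/bucket pair inside OM, so the volume no longer needs to be
// carried in the endpoint URL.
void createAndResolve(S3BucketManager s3BucketManager) throws IOException {
  s3BucketManager.createS3Bucket("testuser", "mybucket");
  // OM can later resolve the S3 name back to its ozone mapping.
  String mapping = s3BucketManager.getOzoneBucketMapping("mybucket");
}
{code}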



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-565) TestContainerPersistence fails regularly in Jenkins

2018-10-04 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-565:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you [~dineshchitlangia] for the fix and [~hanishakoneru] for reporting 
and root-causing the issue.
I have committed it to trunk.

> TestContainerPersistence fails regularly in Jenkins
> ---
>
> Key: HDDS-565
> URL: https://issues.apache.org/jira/browse/HDDS-565
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Hanisha Koneru
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Attachments: HDDS-565.001.patch
>
>
> TestContainerPersistence tests are regularly failing in Jenkins with the 
> error "{{Unable to create directory /dfs/data}}".
> In {{#init()}}, we are setting HDDS_DATANODE_DIR_KEY to a test dir location. 
> But in {{#setupPaths}}, we are using DFS_DATANODE_DATA_DIR_KEY as the data 
> dir location.
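
A paraphrased sketch of the mismatch (the config-key constants are real; the test methods are condensed, not quoted):

{code:java}
void init(OzoneConfiguration conf, File testDir) {
  // The test points the HDDS datanode dir key at a writable test location...
  conf.set(HddsConfigKeys.HDDS_DATANODE_DIR_KEY, testDir.getPath());
}

String setupPaths(OzoneConfiguration conf) {
  // ...but reads the DFS key, which still resolves under /dfs/data and
  // cannot be created on the Jenkins hosts. Both sides must use one key.
  return conf.get(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY);
}
{code}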



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-479) Add more ozone fs tests in the robot integration framework

2018-10-04 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639291#comment-16639291
 ] 

Anu Engineer commented on HDDS-479:
---

I will commit this shortly.

> Add more ozone fs tests in the robot integration framework
> --
>
> Key: HDDS-479
> URL: https://issues.apache.org/jira/browse/HDDS-479
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Minor
>  Labels: alpha2
> Attachments: HDDS-479.001.patch
>
>
> Currently, we have only a few ozone fs tests in the robot integration 
> framework.
> We need to add more.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-354) VolumeInfo.getScmUsed throws NPE

2018-10-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639268#comment-16639268
 ] 

Hudson commented on HDDS-354:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15121 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15121/])
HDDS-354. VolumeInfo.getScmUsed throws NPE. Contributed by Hanisha (bharat: rev 
2a07617f852ceddcf6b38ddcefd912fd953823d9)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/VersionEndpointTask.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/utils/HddsVolumeUtil.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeSet.java


> VolumeInfo.getScmUsed throws NPE
> 
>
> Key: HDDS-354
> URL: https://issues.apache.org/jira/browse/HDDS-354
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-354.001.patch
>
>
> {code}java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:366)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:264)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
>   at java.util.concurrent.FutureTask.run(FutureTask.java)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-572) Support S3 buckets as first class objects in Ozone Manager - 1

2018-10-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-572:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~bharatviswa], [~elek] Thanks for the reviews and comments. I have committed 
this to the trunk.

> Support S3 buckets as first class objects in Ozone Manager - 1
> --
>
> Key: HDDS-572
> URL: https://issues.apache.org/jira/browse/HDDS-572
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-572.001.patch, HDDS-572.002.patch
>
>
> This Jira proposes to add support for S3 buckets as first class objects in 
> Ozone Manager. Currently we take the Ozone volume via the endpoint URL in the 
> AWS SDK. With this (and the next 2 patches), we can move away from using the 
> ozone volume in the URL.
> cc: [~elek], [~bharatviswa]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-577) Support S3 buckets as first class objects in Ozone Manager - 2

2018-10-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-577:
--
Attachment: HDDS-577.001.patch

> Support S3 buckets as first class objects in Ozone Manager - 2
> --
>
> Key: HDDS-577
> URL: https://issues.apache.org/jira/browse/HDDS-577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-577.001.patch
>
>
> This patch is a continuation of HDDS-572. The earlier patch created S3 API 
> support for Ozone Manager; this patch exposes that API to the RPC client. In 
> the next few patches we will add support for S3Gateway and MiniOzone based 
> testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-577) Support S3 buckets as first class objects in Ozone Manager - 2

2018-10-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-577:
--
Status: Patch Available  (was: Open)

> Support S3 buckets as first class objects in Ozone Manager - 2
> --
>
> Key: HDDS-577
> URL: https://issues.apache.org/jira/browse/HDDS-577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-577.001.patch
>
>
> This patch is a continuation of HDDS-572. The earlier patch created S3 API 
> support for Ozone Manager; this patch exposes that API to the RPC client. In 
> the next few patches we will add support for S3Gateway and MiniOzone based 
> testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-565) TestContainerPersistence fails regularly in Jenkins

2018-10-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639255#comment-16639255
 ] 

Hudson commented on HDDS-565:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15120 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15120/])
HDDS-565. TestContainerPersistence fails regularly in Jenkins. (bharat: rev 
7fb91b8a534eaeda55c2de120a7d23f9c90d265b)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java


> TestContainerPersistence fails regularly in Jenkins
> ---
>
> Key: HDDS-565
> URL: https://issues.apache.org/jira/browse/HDDS-565
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Hanisha Koneru
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Fix For: 0.3.0
>
> Attachments: HDDS-565.001.patch
>
>
> TestContainerPersistence tests are regularly failing in Jenkins with the 
> error "{{Unable to create directory /dfs/data}}".
> In {{#init()}}, we are setting HDDS_DATANODE_DIR_KEY to a test dir location. 
> But in {{#setupPaths}}, we are using DFS_DATANODE_DATA_DIR_KEY as the data 
> dir location.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-577) Support S3 buckets as first class objects in Ozone Manager - 2

2018-10-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-577:
--
Environment: (was: This patch is a continuation of HDDS-572. The 
earlier patch created S3 API support for Ozone Manager, this patch exposes that 
API to the RPC client. In the next few patches we will add support for 
S3Gateway and MiniOzone based testing.)

> Support S3 buckets as first class objects in Ozone Manager - 2
> --
>
> Key: HDDS-577
> URL: https://issues.apache.org/jira/browse/HDDS-577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-577) Support S3 buckets as first class objects in Ozone Manager - 2

2018-10-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-577:
--
Description: This patch is a continuation of HDDS-572. The earlier patch 
created S3 API support for Ozone Manager; this patch exposes that API to the 
RPC client. In the next few patches we will add support for S3Gateway and 
MiniOzone based testing.

> Support S3 buckets as first class objects in Ozone Manager - 2
> --
>
> Key: HDDS-577
> URL: https://issues.apache.org/jira/browse/HDDS-577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>
> This patch is a continuation of HDDS-572. The earlier patch created S3 API 
> support for Ozone Manager; this patch exposes that API to the RPC client. In 
> the next few patches we will add support for S3Gateway and MiniOzone based 
> testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-577) Support S3 buckets as first class objects in Ozone Manager - 2

2018-10-04 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-577:
-

 Summary: Support S3 buckets as first class objects in Ozone 
Manager - 2
 Key: HDDS-577
 URL: https://issues.apache.org/jira/browse/HDDS-577
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: S3
 Environment: This patch is a continuation of HDDS-572. The earlier 
patch created S3 API support for Ozone Manager, this patch exposes that API to 
the RPC client. In the next few patches we will add support for S3Gateway and 
MiniOzone based testing.
Reporter: Anu Engineer
Assignee: Anu Engineer






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-576) Move ContainerWithPipeline creation to RPC endpoint

2018-10-04 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-576:
--

 Summary: Move ContainerWithPipeline creation to RPC endpoint
 Key: HDDS-576
 URL: https://issues.apache.org/jira/browse/HDDS-576
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Reporter: Mukul Kumar Singh


With independent Pipeline and Container Managers in SCM, the creation of 
ContainerWithPipeline can be moved to the RPC endpoint. This will ensure a clear 
separation between the Pipeline Manager and the Container Manager.
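
An illustrative sketch of the proposed composition at the RPC endpoint (the type names exist in HDDS, but the manager and method signatures here are assumptions):

{code:java}
// The endpoint stitches the two managers' results together, so the
// Container Manager and Pipeline Manager stay independent of each other.
ContainerWithPipeline allocateContainer(ContainerManager containerManager,
    PipelineManager pipelineManager, String owner) throws IOException {
  ContainerInfo container = containerManager.allocateContainer(owner);
  Pipeline pipeline = pipelineManager.getPipeline(container.getPipelineID());
  return new ContainerWithPipeline(container, pipeline);
}
{code}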



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-354) VolumeInfo.getScmUsed throws NPE

2018-10-04 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639246#comment-16639246
 ] 

Bharat Viswanadham commented on HDDS-354:
-

+1 LGTM.

For the findbugs warning, there is already an open Jira for this issue: HDDS-544.

I will commit this shortly.

> VolumeInfo.getScmUsed throws NPE
> 
>
> Key: HDDS-354
> URL: https://issues.apache.org/jira/browse/HDDS-354
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-354.001.patch
>
>
> {code}java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:366)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:264)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
>   at java.util.concurrent.FutureTask.run(FutureTask.java)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-575) SCMContainerManager#loadExistingContainers should not pass this during initialization

2018-10-04 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-575:
--

 Summary: SCMContainerManager#loadExistingContainers should not 
pass this during initialization
 Key: HDDS-575
 URL: https://issues.apache.org/jira/browse/HDDS-575
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh


SCMContainerManager passes the this pointer during the initialization of 
ContainerStateManager.
This jira proposes to remove this usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-565) TestContainerPersistence fails regularly in Jenkins

2018-10-04 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-565:

Fix Version/s: 0.3.0

> TestContainerPersistence fails regularly in Jenkins
> ---
>
> Key: HDDS-565
> URL: https://issues.apache.org/jira/browse/HDDS-565
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Hanisha Koneru
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Fix For: 0.3.0
>
> Attachments: HDDS-565.001.patch
>
>
> TestContainerPersistence tests are regularly failing in Jenkins with the 
> error "{{Unable to create directory /dfs/data}}".
> In {{#init()}}, we are setting HDDS_DATANODE_DIR_KEY to a test dir location. 
> But in {{#setupPaths}}, we are using DFS_DATANODE_DATA_DIR_KEY as the data 
> dir location.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-565) TestContainerPersistence fails regularly in Jenkins

2018-10-04 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639238#comment-16639238
 ] 

Bharat Viswanadham commented on HDDS-565:
-

+1 LGTM.

I will commit this shortly.

> TestContainerPersistence fails regularly in Jenkins
> ---
>
> Key: HDDS-565
> URL: https://issues.apache.org/jira/browse/HDDS-565
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Hanisha Koneru
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Attachments: HDDS-565.001.patch
>
>
> TestContainerPersistence tests are regularly failing in Jenkins with the 
> error "{{Unable to create directory /dfs/data}}".
> In {{#init()}}, we are setting HDDS_DATANODE_DIR_KEY to a test dir location. 
> But in {{#setupPaths}}, we are using DFS_DATANODE_DATA_DIR_KEY as the data 
> dir location.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-572) Support S3 buckets as first class objects in Ozone Manager - 1

2018-10-04 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639237#comment-16639237
 ] 

Bharat Viswanadham commented on HDDS-572:
-

Thank you [~anu] for the fix.

+1 LGTM. I think the checkstyle issue (adding a comment to a test case) can be 
taken care of during the commit.

> Support S3 buckets as first class objects in Ozone Manager - 1
> --
>
> Key: HDDS-572
> URL: https://issues.apache.org/jira/browse/HDDS-572
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-572.001.patch, HDDS-572.002.patch
>
>
> This Jira proposes to add support for S3 buckets as first class objects in 
> Ozone Manager. Currently we take the Ozone volume via the endPoint URL in AWS 
> sdk. With this(and the next 2 patchs), we can move away from using ozone 
> volume in the URL.
> cc: [~elek], [~bharatviswa]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-569) Add proto changes required for CopyKey to support s3 put object -copy

2018-10-04 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639226#comment-16639226
 ] 

Bharat Viswanadham edited comment on HDDS-569 at 10/5/18 3:48 AM:
--

Hi [~ajayydv],

I did not understand your comment. Do you mean that
"optional string destVolumeName = 6;" should not be optional?


was (Author: bharatviswa):
Hi Ajay,

I did not understand your comment. Do you mean that
"optional string destVolumeName = 6;" should not be optional?

> Add proto changes required for CopyKey to support s3 put object -copy
> -
>
> Key: HDDS-569
> URL: https://issues.apache.org/jira/browse/HDDS-569
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-569.00.patch, HDDS-569.01.patch
>
>
> This Jira is the starter Jira for the changes required by the S3 copy key 
> request, to support copying a key across buckets. In the Ozone world, this is 
> just a metadata change. This Jira only changes the .proto file to support the 
> copy key request.
>  
> [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-569) Add proto changes required for CopyKey to support s3 put object -copy

2018-10-04 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639226#comment-16639226
 ] 

Bharat Viswanadham commented on HDDS-569:
-

Hi Ajay,

I did not understand your comment. Do you mean that
"optional string destVolumeName = 6;" should not be optional?

> Add proto changes required for CopyKey to support s3 put object -copy
> -
>
> Key: HDDS-569
> URL: https://issues.apache.org/jira/browse/HDDS-569
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-569.00.patch, HDDS-569.01.patch
>
>
> This Jira is the starter Jira for the changes required by the S3 copy key 
> request, to support copying a key across buckets. In the Ozone world, this is 
> just a metadata change. This Jira only changes the .proto file to support the 
> copy key request.
>  
> [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-565) TestContainerPersistence fails regularly in Jenkins

2018-10-04 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639210#comment-16639210
 ] 

Dinesh Chitlangia commented on HDDS-565:


[~hanishakoneru] - The failures are unrelated to the patch. I also verified that 
these tests are not failing locally. The concerned test, TestContainerPersistence, 
appears to have passed in this Jenkins run.

> TestContainerPersistence fails regularly in Jenkins
> ---
>
> Key: HDDS-565
> URL: https://issues.apache.org/jira/browse/HDDS-565
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Hanisha Koneru
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Attachments: HDDS-565.001.patch
>
>
> TestContainerPersistence tests are regularly failing in Jenkins with the 
> error {{Unable to create directory /dfs/data}}. 
> In {{#init()}}, we are setting HDDS_DATANODE_DIR_KEY to a test dir location, 
> but in {{#setupPaths}} we are using DFS_DATANODE_DATA_DIR_KEY as the data 
> dir location.
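
A minimal sketch of the mismatch described above, assuming the constants live in ScmConfigKeys and DFSConfigKeys (the class names and the test scaffolding are from memory and illustrative):
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.scm.ScmConfigKeys;
import org.apache.hadoop.hdfs.DFSConfigKeys;

class DataDirMismatchSketch {
  static void sketch(java.io.File testRoot) {
    Configuration conf = new OzoneConfiguration();

    // What #init() does: point the HDDS datanode dir at a per-test location.
    conf.set(ScmConfigKeys.HDDS_DATANODE_DIR_KEY, testRoot.getAbsolutePath());

    // What #setupPaths does: read a *different* key, which was never set,
    // so the code falls back to the default /dfs/data and cannot create it.
    String broken = conf.get(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY);

    // Fix sketch: resolve the path from the key that was actually set.
    String fixed = conf.get(ScmConfigKeys.HDDS_DATANODE_DIR_KEY);
  }
}
{noformat}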






[jira] [Commented] (HDFS-13958) Miscellaneous Improvements for FsVolumeSpi

2018-10-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639184#comment-16639184
 ] 

Hadoop QA commented on HDFS-13958:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 78 unchanged - 12 fixed = 78 total (was 90) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
|   | hadoop.fs.TestHdfsNativeCodeLoader |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13958 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942482/HDFS-13958.2.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 93d7a8137e9c 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cc2babc |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25206/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/

[jira] [Updated] (HDDS-10) Add kdc docker image for secure ozone cluster

2018-10-04 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-10?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-10:
---
Fix Version/s: 0.4.0

> Add kdc docker image for secure ozone cluster
> -
>
> Key: HDDS-10
> URL: https://issues.apache.org/jira/browse/HDDS-10
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-10-HDDS-4.00.patch, HDDS-10-HDDS-4.01.patch, 
> HDDS-10-HDDS-4.02.patch, HDDS-10-HDDS-4.03.patch, HDDS-10-HDDS-4.05.patch
>
>
> Update docker compose and settings to test secure ozone cluster.






[jira] [Commented] (HDFS-13957) Fix incorrect option used in description of InMemoryAliasMap

2018-10-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639172#comment-16639172
 ] 

Hudson commented on HDFS-13957:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15119 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15119/])
HDFS-13957. Fix incorrect option used in description of (yqlin: rev 
619e490333fa89601fd476dedac6d16610e9a52a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md


> Fix incorrect option used in description of InMemoryAliasMap 
> -
>
> Key: HDFS-13957
> URL: https://issues.apache.org/jira/browse/HDFS-13957
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.2.0, 3.1.2, 3.3.0
>
> Attachments: HDFS-13957.001.patch
>
>
> The incorrect option was used in description of InMemoryAliasMap.
> {noformat}
> This is a LevelDB-based alias map that runs as a separate server in Namenode. 
> The alias map itself can be created using the fs2img tool using the option 
> -Ddfs.provided.aliasmap.leveldb.path=file:///path/to/leveldb/map/dingos.db -o 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  as in the example above.
> {noformat}
> Here -o should be -b: -o specifies the output directory for the generated 
> fsimage, while -b specifies the block alias map class. The correct usage, as 
> shown in the doc, is:
> {noformat}
> hadoop org.apache.hadoop.hdfs.server.namenode.FileSystemImage \
>   -Ddfs.provided.aliasmap.leveldb.path=/path/to/leveldb/map/dingos.db \
>   -b 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  \
>   -o file:///tmp/name \
>   -u CustomResolver \
>   hdfs://enfield/projects/ywqmd/incandenza
> {noformat}
>  






[jira] [Updated] (HDFS-13957) Fix incorrect option used in description of InMemoryAliasMap

2018-10-04 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13957:
-
Fix Version/s: 3.1.2
   3.2.0

> Fix incorrect option used in description of InMemoryAliasMap 
> -
>
> Key: HDFS-13957
> URL: https://issues.apache.org/jira/browse/HDFS-13957
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.2.0, 3.1.2, 3.3.0
>
> Attachments: HDFS-13957.001.patch
>
>
> The incorrect option was used in description of InMemoryAliasMap.
> {noformat}
> This is a LevelDB-based alias map that runs as a separate server in Namenode. 
> The alias map itself can be created using the fs2img tool using the option 
> -Ddfs.provided.aliasmap.leveldb.path=file:///path/to/leveldb/map/dingos.db -o 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  as in the example above.
> {noformat}
> Here -o should be -b: -o specifies the output directory for the generated 
> fsimage, while -b specifies the block alias map class. The correct usage, as 
> shown in the doc, is:
> {noformat}
> hadoop org.apache.hadoop.hdfs.server.namenode.FileSystemImage \
>   -Ddfs.provided.aliasmap.leveldb.path=/path/to/leveldb/map/dingos.db \
>   -b 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  \
>   -o file:///tmp/name \
>   -u CustomResolver \
>   hdfs://enfield/projects/ywqmd/incandenza
> {noformat}
>  






[jira] [Commented] (HDFS-13957) Fix incorrect option used in description of InMemoryAliasMap

2018-10-04 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639169#comment-16639169
 ] 

Yiqun Lin commented on HDFS-13957:
--

{quote}
Can you also please commit to branch-3.1?
{quote}
Done. Cherry-picked to branch-3.2 as well.

> Fix incorrect option used in description of InMemoryAliasMap 
> -
>
> Key: HDFS-13957
> URL: https://issues.apache.org/jira/browse/HDFS-13957
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.2.0, 3.1.2, 3.3.0
>
> Attachments: HDFS-13957.001.patch
>
>
> The incorrect option was used in description of InMemoryAliasMap.
> {noformat}
> This is a LevelDB-based alias map that runs as a separate server in Namenode. 
> The alias map itself can be created using the fs2img tool using the option 
> -Ddfs.provided.aliasmap.leveldb.path=file:///path/to/leveldb/map/dingos.db -o 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  as in the example above.
> {noformat}
> Here -o should be -b: -o specifies the output directory for the generated 
> fsimage, while -b specifies the block alias map class. The correct usage, as 
> shown in the doc, is:
> {noformat}
> hadoop org.apache.hadoop.hdfs.server.namenode.FileSystemImage \
>   -Ddfs.provided.aliasmap.leveldb.path=/path/to/leveldb/map/dingos.db \
>   -b 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  \
>   -o file:///tmp/name \
>   -u CustomResolver \
>   hdfs://enfield/projects/ywqmd/incandenza
> {noformat}
>  






[jira] [Updated] (HDFS-13957) Fix incorrect option used in description of InMemoryAliasMap

2018-10-04 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13957:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~virajith] for the review.

> Fix incorrect option used in description of InMemoryAliasMap 
> -
>
> Key: HDFS-13957
> URL: https://issues.apache.org/jira/browse/HDFS-13957
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-13957.001.patch
>
>
> The incorrect option was used in description of InMemoryAliasMap.
> {noformat}
> This is a LevelDB-based alias map that runs as a separate server in Namenode. 
> The alias map itself can be created using the fs2img tool using the option 
> -Ddfs.provided.aliasmap.leveldb.path=file:///path/to/leveldb/map/dingos.db -o 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  as in the example above.
> {noformat}
> Here -o should be -b: -o specifies the output directory for the generated 
> fsimage, while -b specifies the block alias map class. The correct usage, as 
> shown in the doc, is:
> {noformat}
> hadoop org.apache.hadoop.hdfs.server.namenode.FileSystemImage \
>   -Ddfs.provided.aliasmap.leveldb.path=/path/to/leveldb/map/dingos.db \
>   -b 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  \
>   -o file:///tmp/name \
>   -u CustomResolver \
>   hdfs://enfield/projects/ywqmd/incandenza
> {noformat}
>  






[jira] [Commented] (HDFS-13957) Fix incorrect option used in description of InMemoryAliasMap

2018-10-04 Thread Virajith Jalaparti (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639167#comment-16639167
 ] 

Virajith Jalaparti commented on HDFS-13957:
---

[~linyiqun] thanks! Can you also please commit to branch-3.1? 

> Fix incorrect option used in description of InMemoryAliasMap 
> -
>
> Key: HDFS-13957
> URL: https://issues.apache.org/jira/browse/HDFS-13957
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-13957.001.patch
>
>
> The incorrect option was used in description of InMemoryAliasMap.
> {noformat}
> This is a LevelDB-based alias map that runs as a separate server in Namenode. 
> The alias map itself can be created using the fs2img tool using the option 
> -Ddfs.provided.aliasmap.leveldb.path=file:///path/to/leveldb/map/dingos.db -o 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  as in the example above.
> {noformat}
> Here -o should be -b: -o specifies the output directory for the generated 
> fsimage, while -b specifies the block alias map class. The correct usage, as 
> shown in the doc, is:
> {noformat}
> hadoop org.apache.hadoop.hdfs.server.namenode.FileSystemImage \
>   -Ddfs.provided.aliasmap.leveldb.path=/path/to/leveldb/map/dingos.db \
>   -b 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  \
>   -o file:///tmp/name \
>   -u CustomResolver \
>   hdfs://enfield/projects/ywqmd/incandenza
> {noformat}
>  






[jira] [Commented] (HDFS-13957) Fix incorrect option used in description of InMemoryAliasMap

2018-10-04 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639158#comment-16639158
 ] 

Yiqun Lin commented on HDFS-13957:
--

Jenkins report looks good. Committing this shortly.

> Fix incorrect option used in description of InMemoryAliasMap 
> -
>
> Key: HDFS-13957
> URL: https://issues.apache.org/jira/browse/HDFS-13957
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-13957.001.patch
>
>
> The incorrect option was used in description of InMemoryAliasMap.
> {noformat}
> This is a LevelDB-based alias map that runs as a separate server in Namenode. 
> The alias map itself can be created using the fs2img tool using the option 
> -Ddfs.provided.aliasmap.leveldb.path=file:///path/to/leveldb/map/dingos.db -o 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  as in the example above.
> {noformat}
> Here -o should be -b: -o specifies the output directory for the generated 
> fsimage, while -b specifies the block alias map class. The correct usage, as 
> shown in the doc, is:
> {noformat}
> hadoop org.apache.hadoop.hdfs.server.namenode.FileSystemImage \
>   -Ddfs.provided.aliasmap.leveldb.path=/path/to/leveldb/map/dingos.db \
>   -b 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  \
>   -o file:///tmp/name \
>   -u CustomResolver \
>   hdfs://enfield/projects/ywqmd/incandenza
> {noformat}
>  






[jira] [Commented] (HDFS-13957) Fix incorrect option used in description of InMemoryAliasMap

2018-10-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639152#comment-16639152
 ] 

Hadoop QA commented on HDFS-13957:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
32m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13957 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942486/HDFS-13957.001.patch |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux 277bf6b6a5ef 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cc2babc |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 340 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25207/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Fix incorrect option used in description of InMemoryAliasMap 
> -
>
> Key: HDFS-13957
> URL: https://issues.apache.org/jira/browse/HDFS-13957
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-13957.001.patch
>
>
> The incorrect option was used in description of InMemoryAliasMap.
> {noformat}
> This is a LevelDB-based alias map that runs as a separate server in Namenode. 
> The alias map itself can be created using the fs2img tool using the option 
> -Ddfs.provided.aliasmap.leveldb.path=file:///path/to/leveldb/map/dingos.db -o 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  as in the example above.
> {noformat}
> Here -o should be -b: -o specifies the output directory for the generated 
> fsimage, while -b specifies the block alias map class. The correct usage, as 
> shown in the doc, is:
> {noformat}
> hadoop org.apache.hadoop.hdfs.server.namenode.FileSystemImage \
>   -Ddfs.provided.aliasmap.leveldb.path=/path/to/leveldb/map/dingos.db \
>   -b 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  \
>   -o file:///tmp/name \
>   -u CustomResolver \
>   hdfs://enfield/projects/ywqmd/incandenza
> {noformat}
>  






[jira] [Commented] (HDFS-13961) TestObserverNode refactoring

2018-10-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639136#comment-16639136
 ] 

Hadoop QA commented on HDFS-13961:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
30s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 51s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 10 new + 407 unchanged 
- 10 fixed = 417 total (was 417) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 258 unchanged - 4 fixed = 265 total (was 262) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}122m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}179m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.namenode.TestFSImage |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
|   | hadoop.hdfs.tools.TestDFSAdminWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13961 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942461/HDFS-13961-HDFS-12943.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d21ff658171c 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12943 / 967aab6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| find

[jira] [Commented] (HDFS-11396) TestNameNodeMetadataConsistency#testGenerationStampInFuture timed out

2018-10-04 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639142#comment-16639142
 ] 

Íñigo Goiri commented on HDFS-11396:


Yes, it fails all the time for me on Windows with trunk.
I tried to debug it a little, and the reported bytes were always 0 no matter how 
long the test waited.
I'm not very familiar with this code, so I couldn't make much progress.

> TestNameNodeMetadataConsistency#testGenerationStampInFuture timed out
> -
>
> Key: HDFS-11396
> URL: https://issues.apache.org/jira/browse/HDFS-11396
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, test
>Affects Versions: 3.0.0-alpha4
>Reporter: John Zhuge
>Priority: Minor
> Attachments: patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/18334/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt






[jira] [Commented] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads

2018-10-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639120#comment-16639120
 ] 

Hadoop QA commented on HDFS-13926:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  3m 30s{color} 
| {color:red} hadoop-hdfs-project generated 2 new + 467 unchanged - 0 fixed = 
469 total (was 467) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  5s{color} | {color:orange} hadoop-hdfs-project: The patch generated 5 new + 
82 unchanged - 2 fixed = 87 total (was 84) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
35s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}183m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
|   | hadoop.fs.TestHdfsNativeCodeLoader |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13926 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940266/HDFS-13926.01.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 31c5a2c4c733 3.13.0-153-generic #203-Ubuntu 

[jira] [Commented] (HDFS-11396) TestNameNodeMetadataConsistency#testGenerationStampInFuture timed out

2018-10-04 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639113#comment-16639113
 ] 

Yiqun Lin commented on HDFS-11396:
--

[~elgoiri], can you reproduce this failure locally? In HDFS-10499, I used 
{{GenericTestUtils#waitFor}} to wait for the block report to reach the NN. My point 
is that when the mini cluster is busy, there is a chance the block is not fully 
reported to the NN, which then leads to the failure.
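
For reference, a minimal sketch of that {{GenericTestUtils#waitFor}} approach (the cluster handle, expected block count, and timeouts below are illustrative, not the exact HDFS-10499 code):
{noformat}
import java.util.concurrent.TimeoutException;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.test.GenericTestUtils;

class BlockReportWaitSketch {
  static void waitForBlockReports(MiniDFSCluster cluster, long expectedBlocks)
      throws TimeoutException, InterruptedException {
    // Instead of a fixed sleep, poll until the NN sees the expected blocks;
    // throws TimeoutException only if the condition never becomes true.
    GenericTestUtils.waitFor(
        () -> cluster.getNamesystem().getBlocksTotal() == expectedBlocks,
        100,      // re-check every 100 ms
        60_000);  // give up after 60 seconds
  }
}
{noformat}
Polling like this tolerates a slow mini cluster without inflating the runtime of the fast case.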

> TestNameNodeMetadataConsistency#testGenerationStampInFuture timed out
> -
>
> Key: HDFS-11396
> URL: https://issues.apache.org/jira/browse/HDFS-11396
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, test
>Affects Versions: 3.0.0-alpha4
>Reporter: John Zhuge
>Priority: Minor
> Attachments: patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/18334/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt






[jira] [Commented] (HDFS-13878) HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST

2018-10-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639098#comment-16639098
 ] 

Hadoop QA commented on HDFS-13878:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-httpfs: The 
patch generated 1 new + 442 unchanged - 0 fixed = 443 total (was 442) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
0s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13878 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942480/HDFS-13878.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6a9c70b9745f 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cc2babc |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25205/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25205/testReport/ |
| Max. process+thread count | 652 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25205/console |
| 

[jira] [Commented] (HDFS-13957) Fix incorrect option used in description of InMemoryAliasMap

2018-10-04 Thread Virajith Jalaparti (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639099#comment-16639099
 ] 

Virajith Jalaparti commented on HDFS-13957:
---

Thanks [~linyiqun]. +1 on  [^HDFS-13957.001.patch] 

> Fix incorrect option used in description of InMemoryAliasMap 
> -
>
> Key: HDFS-13957
> URL: https://issues.apache.org/jira/browse/HDFS-13957
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-13957.001.patch
>
>
> The incorrect option was used in description of InMemoryAliasMap.
> {noformat}
> This is a LevelDB-based alias map that runs as a separate server in Namenode. 
> The alias map itself can be created using the fs2img tool using the option 
> -Ddfs.provided.aliasmap.leveldb.path=file:///path/to/leveldb/map/dingos.db -o 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  as in the example above.
> {noformat}
> Here -o should be -b: -o specifies the output directory for the generated 
> fsimage, while -b specifies the block alias map class. The correct usage, as 
> shown in the doc, is:
> {noformat}
> hadoop org.apache.hadoop.hdfs.server.namenode.FileSystemImage \
>   -Ddfs.provided.aliasmap.leveldb.path=/path/to/leveldb/map/dingos.db \
>   -b 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  \
>   -o file:///tmp/name \
>   -u CustomResolver \
>   hdfs://enfield/projects/ywqmd/incandenza
> {noformat}
>  






[jira] [Updated] (HDFS-13957) Fix incorrect option used in description of InMemoryAliasMap

2018-10-04 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13957:
-
Assignee: Yiqun Lin
  Status: Patch Available  (was: Open)

Attached the patch. Please review.

> Fix incorrect option used in description of InMemoryAliasMap 
> -
>
> Key: HDFS-13957
> URL: https://issues.apache.org/jira/browse/HDFS-13957
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-13957.001.patch
>
>
> The incorrect option was used in description of InMemoryAliasMap.
> {noformat}
> This is a LevelDB-based alias map that runs as a separate server in Namenode. 
> The alias map itself can be created using the fs2img tool using the option 
> -Ddfs.provided.aliasmap.leveldb.path=file:///path/to/leveldb/map/dingos.db -o 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  as in the example above.
> {noformat}
> Here -o should be -b: -o specifies the output directory for the generated 
> fsimage, while -b specifies the block alias map class. The correct usage, as 
> shown in the doc, is:
> {noformat}
> hadoop org.apache.hadoop.hdfs.server.namenode.FileSystemImage \
>   -Ddfs.provided.aliasmap.leveldb.path=/path/to/leveldb/map/dingos.db \
>   -b 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  \
>   -o file:///tmp/name \
>   -u CustomResolver \
>   hdfs://enfield/projects/ywqmd/incandenza
> {noformat}
>  






[jira] [Updated] (HDFS-13957) Fix incorrect option used in description of InMemoryAliasMap

2018-10-04 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13957:
-
Attachment: HDFS-13957.001.patch

> Fix incorrect option used in description of InMemoryAliasMap 
> -
>
> Key: HDFS-13957
> URL: https://issues.apache.org/jira/browse/HDFS-13957
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-13957.001.patch
>
>
> The incorrect option was used in description of InMemoryAliasMap.
> {noformat}
> This is a LevelDB-based alias map that runs as a separate server in Namenode. 
> The alias map itself can be created using the fs2img tool using the option 
> -Ddfs.provided.aliasmap.leveldb.path=file:///path/to/leveldb/map/dingos.db -o 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  as in the example above.
> {noformat}
> Here -o should be -b: -o specifies the output directory for the generated 
> fsimage, while -b specifies the block alias map class. The correct usage, as 
> shown in the doc, is:
> {noformat}
> hadoop org.apache.hadoop.hdfs.server.namenode.FileSystemImage \
>   -Ddfs.provided.aliasmap.leveldb.path=/path/to/leveldb/map/dingos.db \
>   -b 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  \
>   -o file:///tmp/name \
>   -u CustomResolver \
>   hdfs://enfield/projects/ywqmd/incandenza
> {noformat}
>  






[jira] [Commented] (HDFS-13957) Fix incorrect option used in description of InMemoryAliasMap

2018-10-04 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639087#comment-16639087
 ] 

Yiqun Lin commented on HDFS-13957:
--

{quote}
Do you plan to post a patch for this? If not, I can do it later in the day.
{quote}
[~virajith], I can make a quick fix :). I will attach the patch soon.

> Fix incorrect option used in description of InMemoryAliasMap 
> -
>
> Key: HDFS-13957
> URL: https://issues.apache.org/jira/browse/HDFS-13957
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Yiqun Lin
>Priority: Minor
>
> The incorrect option was used in description of InMemoryAliasMap.
> {noformat}
> This is a LevelDB-based alias map that runs as a separate server in Namenode. 
> The alias map itself can be created using the fs2img tool using the option 
> -Ddfs.provided.aliasmap.leveldb.path=file:///path/to/leveldb/map/dingos.db -o 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  as in the example above.
> {noformat}
> Here -o should be -b: -o specifies the output directory for the generated 
> fsimage, while -b specifies the block alias map class. The correct usage, as 
> shown in the doc, is:
> {noformat}
> hadoop org.apache.hadoop.hdfs.server.namenode.FileSystemImage \
>   -Ddfs.provided.aliasmap.leveldb.path=/path/to/leveldb/map/dingos.db \
>   -b 
> org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap
>  \
>   -o file:///tmp/name \
>   -u CustomResolver \
>   hdfs://enfield/projects/ywqmd/incandenza
> {noformat}
>  






[jira] [Commented] (HDDS-572) Support S3 buckets as first class objects in Ozone Manager - 1

2018-10-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639072#comment-16639072
 ] 

Hadoop QA commented on HDDS-572:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 59s{color} | {color:orange} root: The patch generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 34s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
9s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-572 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942462/HDDS-572.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cb1bc6c27113 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cc2babc |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.

[jira] [Updated] (HDFS-13878) HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST

2018-10-04 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13878:
--
Attachment: HDFS-13878.002.patch
Status: Patch Available  (was: In Progress)

> HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST
> ---
>
> Key: HDFS-13878
> URL: https://issues.apache.org/jira/browse/HDFS-13878
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13878.001.patch, HDFS-13878.002.patch
>
>
> Implement GETSNAPSHOTTABLEDIRECTORYLIST  (from HDFS-13141) in HttpFS.
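
For orientation, a rough sketch of how a client would exercise the new HttpFS endpoint once implemented. The host and user are placeholders, 14000 is assumed as the usual HttpFS port, and the JSON response shape is assumed to follow the WebHDFS variant of the call added in HDFS-13141:
{noformat}
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

class SnapshottableListSketch {
  public static void main(String[] args) throws Exception {
    // GET the list of snapshottable directories via HttpFS.
    URL url = new URL("http://httpfs-host:14000/webhdfs/v1/"
        + "?op=GETSNAPSHOTTABLEDIRECTORYLIST&user.name=hdfs");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    try (InputStream in = conn.getInputStream()) {
      byte[] buf = new byte[4096];
      for (int n; (n = in.read(buf)) > 0; ) {
        System.out.write(buf, 0, n);  // JSON list of snapshottable dirs
      }
    }
  }
}
{noformat}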






[jira] [Commented] (HDDS-565) TestContainerPersistence fails regularly in Jenkins

2018-10-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639055#comment-16639055
 ] 

Hadoop QA commented on HDDS-565:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m 43s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.pipeline.TestNodeFailure |
|   | hadoop.ozone.TestMiniOzoneCluster |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-565 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942471/HDDS-565.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3bbd6f01522b 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cc2babc |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1281/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
|  Test

[jira] [Updated] (HDFS-13878) HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST

2018-10-04 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13878:
--
Status: In Progress  (was: Patch Available)

> HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST
> ---
>
> Key: HDFS-13878
> URL: https://issues.apache.org/jira/browse/HDFS-13878
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13878.001.patch
>
>
> Implement GETSNAPSHOTTABLEDIRECTORYLIST  (from HDFS-13141) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13958) Miscellaneous Improvements for FsVolumeSpi

2018-10-04 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13958:
---
Attachment: HDFS-13958.2.patch

> Miscellaneous Improvements for FsVolumeSpi
> --
>
> Key: HDFS-13958
> URL: https://issues.apache.org/jira/browse/HDFS-13958
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-13958.1.patch, HDFS-13958.2.patch
>
>
> The work in [HDFS-13947] allowed for using {{ArrayList}} instead of 
> {{LinkedList}} when scanning DataNode local directories, however the 
> {{FsVolumeSpi}} implementations were still using (and forcing) 
> {{LinkedList}}.  I propose changing the {{FsVolumeSpi}} signatures to allow 
> for {{Collection}} instead of {{LinkedList}}.  Since I'm looking at the code, 
> I made some small improvements and check-style fixes.
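To make the proposed signature change concrete, here is a minimal sketch; the
interface and method names below are illustrative stand-ins, not the actual
FsVolumeSpi API from the patch.
{code:java}
// Illustrative sketch only -- not the actual HDFS-13958 change.
import java.util.ArrayList;
import java.util.Collection;

interface VolumeScanner {
  // Before (hypothetical shape): void compileReport(LinkedList<String> report);
  // After: accept the least specific type the method actually needs.
  void compileReport(Collection<String> report);
}

class Caller {
  static void scan(VolumeScanner scanner) {
    // Callers are now free to pass an ArrayList (or any other Collection).
    Collection<String> report = new ArrayList<>();
    scanner.compileReport(report);
  }
}
{code}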



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13958) Miscellaneous Improvements for FsVolumeSpi

2018-10-04 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13958:
---
Status: Patch Available  (was: Open)

Added a new patch to address the checkstyle issues. Thanks for pointing it out.

> Miscellaneous Improvements for FsVolumeSpi
> --
>
> Key: HDFS-13958
> URL: https://issues.apache.org/jira/browse/HDFS-13958
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-13958.1.patch, HDFS-13958.2.patch
>
>
> The work in [HDFS-13947] allowed for using {{ArrayList}} instead of 
> {{LinkedList}} when scanning DataNode local directories, however the 
> {{FsVolumeSpi}} implementations were still using (and forcing) 
> {{LinkedList}}.  I propose changing the {{FsVolumeSpi}} signatures to allow 
> for {{Collection}} instead of {{LinkedList}}.  Since I'm looking at the code, 
> I made some small improvements and check-style fixes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13958) Miscellaneous Improvements for FsVolumeSpi

2018-10-04 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13958:
---
Status: Open  (was: Patch Available)

> Miscellaneous Improvements for FsVolumeSpi
> --
>
> Key: HDFS-13958
> URL: https://issues.apache.org/jira/browse/HDFS-13958
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-13958.1.patch, HDFS-13958.2.patch
>
>
> The work in [HDFS-13947] allowed for using {{ArrayList}} instead of 
> {{LinkedList}} when scanning DataNode local directories, however the 
> {{FsVolumeSpi}} implementations were still using (and forcing) 
> {{LinkedList}}.  I propose changing the {{FsVolumeSpi}} signatures to allow 
> for {{Collection}} instead of {{LinkedList}}.  Since I'm looking at the code, 
> I made some small improvements and check-style fixes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13961) TestObserverNode refactoring

2018-10-04 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16638960#comment-16638960
 ] 

Konstantin Shvachko edited comment on HDFS-13961 at 10/4/18 11:45 PM:
--

* I split {{TestObserverNode}} into 3 modules:
*# Main functionality {{TestObserverNode}}
*# Multiple Observers {{TestMultiObserverNode}}
*# Consistency of reads from Observers {{TestConsistentReadsObserver}}
* Made sure that the mini-cluster is spawned only once per module
* Moved common methods into {{HATestUtil}} and {{MiniDFSCluster}}
* Corrected implementation in several places
* Also enabled testMsyncSimple(), which was ignored, waiting for HDFS-13880

[~vagarychen], [~csun] please take a look if you can.



was (Author: shv):
* I split {{TestObserverNode}} into 3 modules:
*# Main functionality {{TestObserverNode}}
*# Multiple Observers {{TestMultiObserverNode}}
*# Consistency of reads from Observers {{TestConsistentReadsObserver}}
* Made sure that the mini-cluster is spawned only once per module
* Moved common methods into {{HATestUtil}} and {{MiniDFSCluster}}
* Corrected implementation in several places

[~vagarychen], [~csun] please take a look if you can.


> TestObserverNode refactoring
> 
>
> Key: HDFS-13961
> URL: https://issues.apache.org/jira/browse/HDFS-13961
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-13961-HDFS-12943.001.patch
>
>
> TestObserverNode combines unit tests for ObserverNode. The tests are of 
> different types. I propose to split them into separate modules, factor out 
> common methods, and optimize it so that it starts and shuts down 
> MiniDFSCluster once for the entire test rather than for individual test 
> cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13956) iNotify should include information to identify a file as either replicated or erasure coded

2018-10-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639001#comment-16639001
 ] 

Hadoop QA commented on HDFS-13956:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  0s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
142 unchanged - 0 fixed = 143 total (was 142) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}165m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.TestSafeMode |
|   | hadoop.fs.TestHdfsNativeCodeLoader |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13956 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942442/HDFS-13956-003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux b81a8606e13b 3.13.0-153

[jira] [Commented] (HDFS-13956) iNotify should include information to identify a file as either replicated or erasure coded

2018-10-04 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639020#comment-16639020
 ] 

Wei-Chiu Chuang commented on HDFS-13956:


On an unrelated note, it looks like we should keep the fields in 
FSEditLog.AddCloseOp in sync with CreateEvent. That means CreateEvent should 
also have the storagePolicyId field (and xAttrs and aclEntries, for that 
matter).

> iNotify should include information to identify a file as either replicated or 
> erasure coded
> ---
>
> Key: HDFS-13956
> URL: https://issues.apache.org/jira/browse/HDFS-13956
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
>Priority: Minor
> Attachments: HDFS-13956-001.patch, HDFS-13956-002.patch, 
> HDFS-13956-003.patch
>
>
> Currently iNotify does not provide information to identify if a given file is 
> using replication or erasure coding mode. This would be very useful for the 
> downstream applications using iNotify functionality (e.g. to tag/search files 
> using erasure coding).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13950) ACL documentation update to indicate that ACL entries are capped by 32

2018-10-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16639008#comment-16639008
 ] 

Hudson commented on HDFS-13950:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15118 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15118/])
HDFS-13950. ACL documentation update to indicate that ACL entries are (weichiu: 
rev cc2babc1f75c93bf89a8f10da525f944c15d02ea)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsPermissionsGuide.md


> ACL documentation update to indicate that ACL entries are capped by 32
> --
>
> Key: HDFS-13950
> URL: https://issues.apache.org/jira/browse/HDFS-13950
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-13950.001.patch
>
>
> The Hadoop documentation does not mention that the ACL entries of a file or 
> directory are capped at 32. My proposal is to add a single line to the md 
> file informing users about this.
> Remark: this is indeed the maximum, as (from AclTransformation.java)
> {code:java}
> private static final int MAX_ENTRIES = 32;{code}
> is set as such.
>  
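For illustration only, the added line could read roughly as below; the actual
wording is whatever HDFS-13950.001.patch puts into HdfsPermissionsGuide.md.
{code}
An ACL on a file or directory is limited to a maximum of 32 entries
(MAX_ENTRIES in AclTransformation.java).
{code}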



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-565) TestContainerPersistence fails regularly in Jenkins

2018-10-04 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-565:
---
Attachment: HDDS-565.001.patch
Status: Patch Available  (was: Open)

[~hanishakoneru] - Attached patch 001 for your review

> TestContainerPersistence fails regularly in Jenkins
> ---
>
> Key: HDDS-565
> URL: https://issues.apache.org/jira/browse/HDDS-565
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Hanisha Koneru
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Attachments: HDDS-565.001.patch
>
>
> TestContainerPersistence tests are regularly failing in Jenkins with the 
> error "{{Unable to create directory /dfs/data}}".
> In {{#init()}}, we set HDDS_DATANODE_DIR_KEY to a test dir location, but in 
> {{#setupPaths}} we use DFS_DATANODE_DATA_DIR_KEY as the data dir location.
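A minimal sketch of the fix direction, assuming the two code paths should
simply agree on one location; the class and helper below are hypothetical, and
only the two configuration key names come from this report.
{code:java}
// Hypothetical sketch -- not the attached patch.
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.test.GenericTestUtils;

class ContainerPersistenceTestSetup {
  static OzoneConfiguration newConf() {
    OzoneConfiguration conf = new OzoneConfiguration();
    String dir = GenericTestUtils.getTempPath("container-persistence");
    // Point both keys at the same test directory so nothing falls back to
    // the default /dfs/data location.
    conf.set("hdds.datanode.dir", dir);      // value of HDDS_DATANODE_DIR_KEY
    conf.set("dfs.datanode.data.dir", dir);  // value of DFS_DATANODE_DATA_DIR_KEY
    return conf;
  }
}
{code}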



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13877) HttpFS: Implement GETSNAPSHOTDIFF

2018-10-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16638989#comment-16638989
 ] 

Hudson commented on HDFS-13877:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15117 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15117/])
HDFS-13877. HttpFS: Implement GETSNAPSHOTDIFF. Contributed by Siyao (weichiu: 
rev 396ce0d9f470a5e8af03987ad6396d0f08b3d225)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java


> HttpFS: Implement GETSNAPSHOTDIFF
> -
>
> Key: HDFS-13877
> URL: https://issues.apache.org/jira/browse/HDFS-13877
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13877.001.patch, HDFS-13877.001.patch, 
> HDFS-13877.002.patch, HDFS-13877.003.patch, HDFS-13877.004.patch
>
>
> Implement GETSNAPSHOTDIFF (from HDFS-13052) in HttpFS.
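As a usage illustration (not part of the commit): assuming HttpFS mirrors the
WebHDFS op from HDFS-13052 on its default port 14000, a diff between two
snapshots of a snapshottable directory would be requested roughly like this;
the path, snapshot names, and user are placeholders.
{code}
curl -i "http://httpfs-host:14000/webhdfs/v1/snapdir?op=GETSNAPSHOTDIFF&oldsnapshotname=s1&snapshotname=s2&user.name=hdfs"
{code}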



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-564:
--
Target Version/s: 0.3.0

> Update docker-hadoop-runner branch to reflect changes done in HDDS-490
> --
>
> Key: HDDS-564
> URL: https://issues.apache.org/jira/browse/HDDS-564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-564-docker-hadoop-runner.001.patch
>
>
> starter.sh needs to be modified to reflect the changes done in HDDS-490.
> For compatibility, starter.sh should support both the old-style and new-style 
> options.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-571) Update SCM chill mode exit criteria to optionally wait for n datanodes

2018-10-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-571:
--
Target Version/s: 0.3.0

> Update SCM chill mode exit criteria to optionally wait for n datanodes
> --
>
> Key: HDDS-571
> URL: https://issues.apache.org/jira/browse/HDDS-571
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-571.00.patch
>
>
> As suggested by [~arpitagarwal], [~anu] in [HDDS-512], this jira is to update 
> SCM chill mode exit criteria to optionally wait for n datanodes. 
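A hedged sketch of what such an optional exit criterion could look like as
configuration; the property name, default, and wording below are illustrative
guesses, not necessarily what the patch defines.
{code:xml}
<property>
  <name>hdds.scm.chillmode.min.datanode</name>
  <value>3</value>
  <description>Illustrative: SCM stays in chill mode until at least this
  many datanodes have registered.</description>
</property>
{code}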



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-284) CRC for ChunksData

2018-10-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-284:
--
Target Version/s: 0.4.0
   Fix Version/s: (was: 0.3.0)

> CRC for ChunksData
> --
>
> Key: HDDS-284
> URL: https://issues.apache.org/jira/browse/HDDS-284
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-284.00.patch, HDDS-284.01.patch, HDDS-284.02.patch, 
> HDDS-284.03.patch, HDDS-284.04.patch, Interleaving CRC and Error Detection 
> for Containers.pdf
>
>
> This Jira is to add CRC for chunks data.
>  
>  
> Right now a ChunkInfo structure looks like this:
> {code}
> message ChunkInfo {
>   required string chunkName = 1;
>   required uint64 offset = 2;
>   required uint64 len = 3;
>   optional string checksum = 4;
>   repeated KeyValue metadata = 5;
> }
> {code}
> Proposal is to change the ChunkInfo structure as below:
> {code}
> message ChunkInfo {
>   required string chunkName = 1;
>   required uint64 offset = 2;
>   required uint64 len = 3;
>   optional bytes checksum = 4;
>   optional CRCType checksumType = 5;
>   optional string legacyMetadata = 6;
>   optional string legacyData = 7;
>   repeated KeyValue metadata = 8;
> }
> {code}
> Instead of changing the disk format, we put the checksum, checksumType and 
> legacy data fields into ChunkInfo.
>  
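For illustration, a minimal sketch of how a writer could fill the proposed
fields; the CRC32 choice and class below are hypothetical, and the actual
algorithm is whatever checksumType values the patch ends up supporting.
{code:java}
// Hypothetical write-path sketch -- not the attached patch.
import java.util.zip.CRC32;

class ChunkChecksum {
  /** CRC over the chunk bytes; the value would go into ChunkInfo.checksum,
   *  with checksumType set to the matching CRCType value. */
  static long crcOf(byte[] chunkData) {
    CRC32 crc = new CRC32();
    crc.update(chunkData, 0, chunkData.length);
    return crc.getValue();
  }
}
{code}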



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-8) Add OzoneManager Delegation Token support

2018-10-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-8?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-8:

Fix Version/s: (was: 0.3.0)
   0.4.0

> Add OzoneManager Delegation Token support
> -
>
> Key: HDDS-8
> URL: https://issues.apache.org/jira/browse/HDDS-8
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-8-HDDS-4.00.patch, HDDS-8-HDDS-4.01.patch, 
> HDDS-8-HDDS-4.02.patch, HDDS-8-HDDS-4.03.patch, HDDS-8-HDDS-4.04.patch, 
> HDDS-8-HDDS-4.05.patch, HDDS-8-HDDS-4.06.patch, HDDS-8-HDDS-4.07.patch, 
> HDDS-8-HDDS-4.08.patch, HDDS-8-HDDS-4.09.patch, HDDS-8-HDDS-4.10.patch, 
> HDDS-8-HDDS-4.11.patch, HDDS-8-HDDS-4.12.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13950) ACL documentation update to indicate that ACL entries are capped by 32

2018-10-04 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13950:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Pushed to trunk. Thanks [~adam.antal]!

> ACL documentation update to indicate that ACL entries are capped by 32
> --
>
> Key: HDFS-13950
> URL: https://issues.apache.org/jira/browse/HDFS-13950
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-13950.001.patch
>
>
> The Hadoop documentation does not mention that the ACL entries of a file or 
> directory are capped at 32. My proposal is to add a single line to the md 
> file informing users about this.
> Remark: this is indeed the maximum, as (from AclTransformation.java)
> {code:java}
> private static final int MAX_ENTRIES = 32;{code}
> is set as such.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-572) Support S3 buckets as first class objects in Ozone Manager - 1

2018-10-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-572:
--
Target Version/s: 0.3.0, 0.2.2  (was: 0.3.0)

> Support S3 buckets as first class objects in Ozone Manager - 1
> --
>
> Key: HDDS-572
> URL: https://issues.apache.org/jira/browse/HDDS-572
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-572.001.patch, HDDS-572.002.patch
>
>
> This Jira proposes to add support for S3 buckets as first class objects in 
> Ozone Manager. Currently we take the Ozone volume via the endpoint URL in the 
> AWS SDK. With this (and the next two patches), we can move away from using 
> the Ozone volume in the URL.
> cc: [~elek], [~bharatviswa]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13877) HttpFS: Implement GETSNAPSHOTDIFF

2018-10-04 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13877:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to trunk. Thanks [~smeng] for the patch contribution and [~ljain] for 
review.

> HttpFS: Implement GETSNAPSHOTDIFF
> -
>
> Key: HDFS-13877
> URL: https://issues.apache.org/jira/browse/HDFS-13877
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13877.001.patch, HDFS-13877.001.patch, 
> HDFS-13877.002.patch, HDFS-13877.003.patch, HDFS-13877.004.patch
>
>
> Implement GETSNAPSHOTDIFF (from HDFS-13052) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13877) HttpFS: Implement GETSNAPSHOTDIFF

2018-10-04 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13877:
---
Fix Version/s: 3.3.0

> HttpFS: Implement GETSNAPSHOTDIFF
> -
>
> Key: HDFS-13877
> URL: https://issues.apache.org/jira/browse/HDFS-13877
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13877.001.patch, HDFS-13877.001.patch, 
> HDFS-13877.002.patch, HDFS-13877.003.patch, HDFS-13877.004.patch
>
>
> Implement GETSNAPSHOTDIFF (from HDFS-13052) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13956) iNotify should include information to identify a file as either replicated or erasure coded

2018-10-04 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16638945#comment-16638945
 ] 

Wei-Chiu Chuang commented on HDFS-13956:


Thanks [~hgadre], your patch makes sense to me.
Question: is the goal to identify whether a file is replicated or 
erasure-coded, or do you intend to identify the EC policy?
If it's the former, we just need a boolean; if it's the latter, I'm not sure 
the byte number of the EC policy means much outside the NN. IMO, the byte 
number is an internal data structure that shouldn't (and doesn't) mean much 
outside the NN.
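A minimal sketch of the "just a boolean" option discussed above, assuming the
flag is carried on the create event; the field and accessor names here are
hypothetical, not the API in the attached patches.
{code:java}
// Hypothetical shape only -- not the actual HDFS-13956 API.
class CreateEvent {
  private final boolean erasureCoded;

  CreateEvent(boolean erasureCoded) {
    this.erasureCoded = erasureCoded;
  }

  /** True if the created file is erasure-coded, false if replicated. */
  boolean isErasureCoded() {
    return erasureCoded;
  }
}
{code}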

> iNotify should include information to identify a file as either replicated or 
> erasure coded
> ---
>
> Key: HDFS-13956
> URL: https://issues.apache.org/jira/browse/HDFS-13956
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
>Priority: Minor
> Attachments: HDFS-13956-001.patch, HDFS-13956-002.patch, 
> HDFS-13956-003.patch
>
>
> Currently iNotify does not provide information to identify if a given file is 
> using replication or erasure coding mode. This would be very useful for the 
> downstream applications using iNotify functionality (e.g. to tag/search files 
> using erasure coding).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13961) TestObserverNode refactoring

2018-10-04 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-13961:
---
Attachment: HDFS-13961-HDFS-12943.001.patch

> TestObserverNode refactoring
> 
>
> Key: HDFS-13961
> URL: https://issues.apache.org/jira/browse/HDFS-13961
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-13961-HDFS-12943.001.patch
>
>
> TestObserverNode combines unit tests for ObserverNode. The tests are of 
> different types. I propose to split them into separate modules, factor out 
> common methods, and optimize it so that it starts and shuts down 
> MiniDFSCluster once for the entire test rather than for individual test 
> cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13961) TestObserverNode refactoring

2018-10-04 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-13961:
---
Status: Patch Available  (was: Open)

* I split {{TestObserverNode}} into 3 modules:
*# Main functionality {{TestObserverNode}}
*# Multiple Observers {{TestMultiObserverNode}}
*# Consistency of reads from Observers {{TestConsistentReadsObserver}}
* Made sure that the mini-cluster is spawned only once per module (see the sketch below)
* Moved common methods into {{HATestUtil}} and {{MiniDFSCluster}}
* Corrected implementation in several places

[~vagarychen], [~csun] please take a look if you can.
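A minimal sketch of the once-per-module pattern, assuming JUnit 4 as used in
the HDFS test suites; the class name is illustrative, not the attached patch.
{code:java}
// Illustrative JUnit 4 skeleton -- not the attached patch.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class TestObserverNodeSkeleton {
  private static MiniDFSCluster cluster;

  @BeforeClass
  public static void startCluster() throws Exception {
    // Started once for the whole module, not once per test case.
    cluster = new MiniDFSCluster.Builder(new Configuration()).build();
    cluster.waitActive();
  }

  @AfterClass
  public static void stopCluster() {
    if (cluster != null) {
      cluster.shutdown();
    }
  }

  // Individual @Test methods all reuse the same running cluster.
}
{code}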


> TestObserverNode refactoring
> 
>
> Key: HDFS-13961
> URL: https://issues.apache.org/jira/browse/HDFS-13961
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-13961-HDFS-12943.001.patch
>
>
> TestObserverNode combines unit tests for ObserverNode. The tests are of 
> different types. I propose to split them into separate modules, factor out 
> common methods, and optimize it so that it starts and shuts down 
> MiniDFSCluster once for the entire test rather than for individual test 
> cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-572) Support S3 buckets as first class objects in Ozone Manager - 1

2018-10-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-572:
--
Attachment: HDDS-572.002.patch

> Support S3 buckets as first class objects in Ozone Manager - 1
> --
>
> Key: HDDS-572
> URL: https://issues.apache.org/jira/browse/HDDS-572
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-572.001.patch, HDDS-572.002.patch
>
>
> This Jira proposes to add support for S3 buckets as first class objects in 
> Ozone Manager. Currently we take the Ozone volume via the endpoint URL in the 
> AWS SDK. With this (and the next two patches), we can move away from using 
> the Ozone volume in the URL.
> cc: [~elek], [~bharatviswa]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads

2018-10-04 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16638949#comment-16638949
 ] 

Wei-Chiu Chuang commented on HDFS-13926:


I re-ran the Jenkins job. Please wait. (Or you can download Yetus and use it 
to run precommit locally.)

> ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped 
> reads
> ---
>
> Key: HDFS-13926
> URL: https://issues.apache.org/jira/browse/HDFS-13926
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13926.01.patch, HDFS-13926.prelim.patch
>
>
> During some integration testing, [~nsheth] found that per-thread read stats 
> for EC are incorrect. This is because striped reads are done asynchronously 
> on worker threads.
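A small self-contained illustration of the failure mode described above: a
thread-local counter bumped on a worker thread is invisible to the caller, so
async striped reads need their byte counts merged back explicitly. The names
here are hypothetical, not the HDFS-13926 fix itself.
{code:java}
// Hypothetical illustration -- not the actual patch.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.LongAdder;

class StripedReadStatsDemo {
  // Per-thread counter, as FileSystem.Statistics keeps internally.
  static final ThreadLocal<long[]> BYTES_READ =
      ThreadLocal.withInitial(() -> new long[1]);

  public static void main(String[] args) throws Exception {
    ExecutorService workers = Executors.newFixedThreadPool(2);
    LongAdder merged = new LongAdder();
    workers.submit(() -> {
      BYTES_READ.get()[0] += 1024;  // credited to the worker's ThreadLocal
      merged.add(1024);             // one way to hand the count back
    }).get();
    workers.shutdown();
    System.out.println("caller-thread view: " + BYTES_READ.get()[0]); // 0
    System.out.println("merged view:        " + merged.sum());        // 1024
  }
}
{code}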



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13961) TestObserverNode refactoring

2018-10-04 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-13961:
--

 Summary: TestObserverNode refactoring
 Key: HDFS-13961
 URL: https://issues.apache.org/jira/browse/HDFS-13961
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: HDFS-12943
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko


TestObserverNode combines unit tests for ObserverNode. The tests are of 
different types. I propose to split them into separate modules, factor out 
common methods, and optimize it so that it starts and shuts down 
MiniDFSCluster once for the entire test rather than for individual test cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-10) Add kdc docker image for secure ozone cluster

2018-10-04 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-10?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16638886#comment-16638886
 ] 

Ajay Kumar commented on HDDS-10:


[~xyao] thanks for the review and commit. Created [HDDS-574] to take care of 
the binary file.

> Add kdc docker image for secure ozone cluster
> -
>
> Key: HDDS-10
> URL: https://issues.apache.org/jira/browse/HDDS-10
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-10-HDDS-4.00.patch, HDDS-10-HDDS-4.01.patch, 
> HDDS-10-HDDS-4.02.patch, HDDS-10-HDDS-4.03.patch, HDDS-10-HDDS-4.05.patch
>
>
> Update docker compose and settings to test secure ozone cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-574) Replace binary file required for kdc with script

2018-10-04 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-574:

Issue Type: Sub-task  (was: Improvement)
Parent: HDDS-4

> Replace binary file required for kdc with script
> 
>
> Key: HDDS-574
> URL: https://issues.apache.org/jira/browse/HDDS-574
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Priority: Major
>
> The KDC Docker image used to boot up a secure Ozone cluster contains a binary 
> file. We should replace it with a script or source code with the same 
> functionality.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-574) Replace binary file required for kdc with script

2018-10-04 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-574:
---

 Summary: Replace binary file required for kdc with script
 Key: HDDS-574
 URL: https://issues.apache.org/jira/browse/HDDS-574
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Ajay Kumar


The KDC Docker image used to boot up a secure Ozone cluster contains a binary 
file. We should replace it with a script or source code with the same 
functionality.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-568) Ozone sh unable to delete volume

2018-10-04 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16638870#comment-16638870
 ] 

Arpit Agarwal commented on HDDS-568:


Thanks for reporting this issue [~ssulav]. We'll take a look at it.

> Ozone sh unable to delete volume
> 
>
> Key: HDDS-568
> URL: https://issues.apache.org/jira/browse/HDDS-568
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Priority: Blocker
>
> The Ozone filesystem volume isn't getting deleted even though the underlying 
> bucket is deleted and the volume is currently empty.
> The ozone sh command throws an error, VOLUME_NOT_FOUND, even though the 
> volume is there. On trying to create it again, it says 
> error:VOLUME_ALREADY_EXISTS (as expected).
> {code:java}
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh bucket list fstestvol
> [ ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume delete fstestvol
> Delete Volume failed, error:VOLUME_NOT_FOUND
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume list
> [ {
>   "owner" : {
> "name" : "root"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 1048576
>   },
>   "volumeName" : "fstestvol",
>   "createdOn" : "Fri, 21 Sep 2018 11:19:23 GMT",
>   "createdBy" : "root"
> } ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume create fstestvol 
> -u=hdfs
> 2018-10-03 10:14:49,151 [main] INFO - Creating Volume: fstestvol, with hdfs 
> as owner and quota set to 1152921504606846976 bytes.
> Volume creation failed, error:VOLUME_ALREADY_EXISTS
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-539) ozone datanode ignores the invalid options

2018-10-04 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-539:
---
Labels: newbie  (was: )

> ozone datanode ignores the invalid options
> --
>
> Key: HDDS-539
> URL: https://issues.apache.org/jira/browse/HDDS-539
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
>  Labels: newbie
>
> The ozone datanode command starts the datanode and ignores invalid options, 
> apart from help:
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone datanode -help
> Starts HDDS Datanode
> {code}
> For all other invalid options, it just ignores them and starts the DN, as 
> below:
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone datanode -ABC
> 2018-09-22 00:59:34,462 [main] INFO - STARTUP_MSG:
> /
> STARTUP_MSG: Starting HddsDatanodeService
> STARTUP_MSG: host = 
> ctr-e138-1518143905142-481027-01-02.hwx.site/172.27.54.20
> STARTUP_MSG: args = [-ABC]
> STARTUP_MSG: version = 3.2.0-SNAPSHOT
> STARTUP_MSG: classpath = 
> /root/ozone-0.3.0-SNAPSHOT/etc/hadoop:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-cli-1.2.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/guava-11.0.2.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/hadoop-auth-3.2.0-SNAPSHOT.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jsr305-3.0.0.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-compress-1.4.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-collections-3.2.2.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jsp-api-2.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/zookeeper-3.4.9.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/gson-2.2.4.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/token-provider-1.0.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/dnsjava-2.1.7.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/avro-1.7.7.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jersey-json-1.19.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/stax2-api-3.1.4.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/log4j-1.2.17.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/accessors-smart-1.2.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-lang3-3.7.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jersey-server-1.19.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/netty-3.10.5.Final.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/snappy-java-1.0.5.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/kerby-config-1.0.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/kerby-util-1.0.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/httpclient-4.5.2.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jetty-security-9.3.19.v20170502.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/hadoop-annotations-3.2.0-SNAPSHOT.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/re2j-1.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jackson-databind-2.9.5.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-math3-3.1.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-logging-1.1.3.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jersey-core-1.19.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/kerb-client-1.0.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jsch-0.1.54.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jersey-servlet-1.19.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/asm-5.0.4.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jackson-core-2.9.5.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jetty-util-9.3.19.v20170502.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/root/ozone-0.3.0-SNAPSHOT/sha
re/hadoop/common/lib/kerb-core-1.0.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/json-smart-2.3.jar:/r

[jira] [Updated] (HDDS-563) Support hybrid VirtualHosty style URL

2018-10-04 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-563:
--
Description: 

"I found that we need to support an url scheme where the volume comes from the 
domain ([http://vol1.s3g/]...) but the bucket is used as path style 
([http://vol1.s3g/bucket]). It seems that both goofys and the existing s3a unit 
tests (not sure, but it seems) requires this schema."

So hybrid means that the volume is identified based on the host name but bucket 
name comes from url postfix.

This Jira is created from [~elek] comments on HDDS-525 jira.

  was:
a) the host HTTP header sometimes contains the port, sometimes not (with the 
aws cli we have the port, with the mitm proxy we don't). It would be easier to 
remove it anyway to make it easier to configure.

b) I found that we need to support a URL scheme where the volume comes from 
the domain ([http://vol1.s3g/]...) but the bucket is used path-style 
([http://vol1.s3g/bucket]). It seems that both goofys and the existing s3a unit 
tests (not sure, but it seems so) require this scheme.

 

This Jira is created from [~elek] comments on HDDS-525 jira.


> Support hybrid VirtualHosty style URL
> -
>
> Key: HDDS-563
> URL: https://issues.apache.org/jira/browse/HDDS-563
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> "I found that we need to support an url scheme where the volume comes from 
> the domain ([http://vol1.s3g/]...) but the bucket is used as path style 
> ([http://vol1.s3g/bucket]). It seems that both goofys and the existing s3a 
> unit tests (not sure, but it seems) requires this schema."
> So hybrid means that the volume is identified based on the host name but 
> bucket name comes from url postfix.
> This Jira is created from [~elek] comments on HDDS-525 jira.
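A hedged sketch of the hybrid resolution rule (volume from the Host header,
bucket from the path, optional port ignored); the class and parameter names
below are hypothetical, not the eventual filter code.
{code:java}
// Hypothetical sketch -- not the actual s3gateway filter.
class HybridAddress {
  final String volume;
  final String bucket;

  HybridAddress(String hostHeader, String path, String s3gDomain) {
    String host = hostHeader.split(":", 2)[0];  // drop an optional port
    this.volume = host.endsWith("." + s3gDomain)
        ? host.substring(0, host.length() - s3gDomain.length() - 1)
        : null;  // plain path-style request, no volume in the host name
    String p = path.startsWith("/") ? path.substring(1) : path;
    this.bucket = p.isEmpty() ? null : p.split("/", 2)[0];
  }
}
// e.g. Host "vol1.s3g:9878" + path "/bucket" -> volume "vol1", bucket "bucket"
{code}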



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-10) Add kdc docker image for secure ozone cluster

2018-10-04 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-10?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-10:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~ajayydv] for the contribution and all for the reviews. I've committed 
the patch to the feature branch.

> Add kdc docker image for secure ozone cluster
> -
>
> Key: HDDS-10
> URL: https://issues.apache.org/jira/browse/HDDS-10
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-10-HDDS-4.00.patch, HDDS-10-HDDS-4.01.patch, 
> HDDS-10-HDDS-4.02.patch, HDDS-10-HDDS-4.03.patch, HDDS-10-HDDS-4.05.patch
>
>
> Update docker compose and settings to test secure ozone cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-571) Update SCM chill mode exit criteria to optionally wait for n datanodes

2018-10-04 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16638876#comment-16638876
 ] 

Ajay Kumar commented on HDDS-571:
-

Will increase the timeout for the failed test along with any review comments.

> Update SCM chill mode exit criteria to optionally wait for n datanodes
> --
>
> Key: HDDS-571
> URL: https://issues.apache.org/jira/browse/HDDS-571
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-571.00.patch
>
>
> As suggested by [~arpitagarwal], [~anu] in [HDDS-512], this jira is to update 
> SCM chill mode exit criteria to optionally wait for n datanodes. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-512) update test.sh to remove robot framework & python-pip installation

2018-10-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16638873#comment-16638873
 ] 

Hadoop QA commented on HDDS-512:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
15s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-512 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942443/HDDS-512.002.patch |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 989585345e2e 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e60b797 |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1279/testReport/ |
| Max. process+thread count | 306 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1279/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> update test.sh to remove robot framework & python-pip installation
> --
>
> Key: HDDS-512
> URL: https://issues.apache.org/jira/browse/HDDS-512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-512.001.patch, HDDS-512.002.patch
>
>
> update test.sh to remove robot framework & python-pip installation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-573) Make VirtualHostStyleFilter port agnostic

2018-10-04 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-573:
-

Assignee: Elek, Marton

> Make VirtualHostStyleFilter port agnostic
> -
>
> Key: HDDS-573
> URL: https://issues.apache.org/jira/browse/HDDS-573
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie
>
> Based on the discussion in HDDS-525
> The host HTTP header sometimes contains the port, sometimes not (with aws cli 
> we have the port, with mitm proxy we doesn't). Would be easier to remove it 
> anyway to make it easier to configure the s3 gateway.
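
To make the proposal concrete, here is a minimal sketch of what port-agnostic 
handling could look like in a JAX-RS request filter. The class name and the 
choice to rewrite the Host header in place are assumptions for illustration, 
not the actual VirtualHostStyleFilter implementation:

{code:java}
import java.io.IOException;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.ext.Provider;

/**
 * Illustrative only: normalizes the Host header so that
 * "vol1.s3g:9878" and "vol1.s3g" are parsed identically downstream.
 */
@Provider
public class PortAgnosticHostFilter implements ContainerRequestFilter {

  @Override
  public void filter(ContainerRequestContext ctx) throws IOException {
    String host = ctx.getHeaderString(HttpHeaders.HOST);
    if (host == null || host.startsWith("[")) {
      return; // no header, or an IPv6 literal -- left alone in this sketch
    }
    int colon = host.indexOf(':');
    if (colon != -1) {
      // Drop the ":port" suffix before virtual-host-style parsing.
      ctx.getHeaders().putSingle(HttpHeaders.HOST, host.substring(0, colon));
    }
  }
}
{code}

With a filter like this in front of the virtual-host parsing, requests coming 
through the aws cli (with port) and through a proxy (without port) resolve to 
the same volume name.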



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-563) Support hybrid VirtualHosty style URL

2018-10-04 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16638871#comment-16638871
 ] 

Elek, Marton commented on HDDS-563:
---

Moved the second requirement to a separate jira, HDDS-573, to make it easier to 
review.

> Support hybrid VirtualHosty style URL
> -
>
> Key: HDDS-563
> URL: https://issues.apache.org/jira/browse/HDDS-563
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> "I found that we need to support an url scheme where the volume comes from 
> the domain ([http://vol1.s3g/]...) but the bucket is used as path style 
> ([http://vol1.s3g/bucket]). It seems that both goofys and the existing s3a 
> unit tests (not sure, but it seems) requires this schema."
> So hybrid means that the volume is identified based on the host name but 
> bucket name comes from url postfix.
> This Jira is created from [~elek] comments on HDDS-525 jira.
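
As a rough sketch of the hybrid scheme (hostname carries the volume, path 
carries the bucket), assuming a configured gateway domain such as "s3g"; the 
class and method names below are hypothetical:

{code:java}
import java.util.Optional;

/** Illustrative parser: volume from the Host header, bucket from the path. */
public final class HybridAddressParser {

  private final String gatewayDomain; // assumed config value, e.g. "s3g"

  public HybridAddressParser(String gatewayDomain) {
    this.gatewayDomain = gatewayDomain;
  }

  /** "vol1.s3g" -> "vol1"; a bare "s3g" host carries no volume. */
  public Optional<String> volumeFromHost(String host) {
    String suffix = "." + gatewayDomain;
    if (host != null && host.endsWith(suffix)) {
      return Optional.of(host.substring(0, host.length() - suffix.length()));
    }
    return Optional.empty();
  }

  /** "/bucket/key" -> "bucket"; "/" or "" -> empty. */
  public Optional<String> bucketFromPath(String path) {
    if (path == null) {
      return Optional.empty();
    }
    String trimmed = path.startsWith("/") ? path.substring(1) : path;
    if (trimmed.isEmpty()) {
      return Optional.empty();
    }
    int slash = trimmed.indexOf('/');
    return Optional.of(slash == -1 ? trimmed : trimmed.substring(0, slash));
  }
}
{code}

Under these assumptions, http://vol1.s3g/bucket resolves to volume vol1 and 
bucket bucket, which matches the schema goofys and the s3a unit tests appear 
to expect.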



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-568) Ozone sh unable to delete volume

2018-10-04 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-568:
---
Priority: Blocker  (was: Major)

> Ozone sh unable to delete volume
> 
>
> Key: HDDS-568
> URL: https://issues.apache.org/jira/browse/HDDS-568
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Priority: Blocker
>
> An Ozone filesystem volume cannot be deleted even though the underlying 
> bucket has been deleted and the volume is currently empty.
> The ozone sh command throws error:VOLUME_NOT_FOUND even though the volume 
> exists.
> Trying to create the volume again fails with error:VOLUME_ALREADY_EXISTS (as 
> expected).
> {code:java}
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh bucket list fstestvol
> [ ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume delete fstestvol
> Delete Volume failed, error:VOLUME_NOT_FOUND
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume list
> [ {
>   "owner" : {
> "name" : "root"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 1048576
>   },
>   "volumeName" : "fstestvol",
>   "createdOn" : "Fri, 21 Sep 2018 11:19:23 GMT",
>   "createdBy" : "root"
> } ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume create fstestvol 
> -u=hdfs
> 2018-10-03 10:14:49,151 [main] INFO - Creating Volume: fstestvol, with hdfs 
> as owner and quota set to 1152921504606846976 bytes.
> Volume creation failed, error:VOLUME_ALREADY_EXISTS
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-573) Make VirtualHostStyleFilter port agnostic

2018-10-04 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-573:
-

 Summary: Make VirtualHostStyleFilter port agnostic
 Key: HDDS-573
 URL: https://issues.apache.org/jira/browse/HDDS-573
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Elek, Marton


Based on the discussion in HDDS-525

The host HTTP header sometimes contains the port and sometimes does not (with 
the aws cli we get the port; with the mitm proxy we don't). It would be easier 
to strip the port in any case, which would also simplify configuring the s3 
gateway.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-563) Support hybrid VirtualHosty style URL

2018-10-04 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-563:
--
Labels: newbie  (was: )

> Support hybrid VirtualHosty style URL
> -
>
> Key: HDDS-563
> URL: https://issues.apache.org/jira/browse/HDDS-563
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> "I found that we need to support an url scheme where the volume comes from 
> the domain ([http://vol1.s3g/]...) but the bucket is used as path style 
> ([http://vol1.s3g/bucket]). It seems that both goofys and the existing s3a 
> unit tests (not sure, but it seems) requires this schema."
> So hybrid means that the volume is identified based on the host name but 
> bucket name comes from url postfix.
> This Jira is created from [~elek] comments on HDDS-525 jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-563) Support hybrid VirtualHosty style URL

2018-10-04 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-563:
--
Summary: Support hybrid VirtualHosty style URL  (was: Improve 
VirtualHoststyle filter)

> Support hybrid VirtualHosty style URL
> -
>
> Key: HDDS-563
> URL: https://issues.apache.org/jira/browse/HDDS-563
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>
> a) The host HTTP header sometimes contains the port and sometimes does not 
> (with the aws cli we get the port; with the mitm proxy we don't). It would be 
> easier to strip the port in any case, which would also simplify configuration.
> b) I found that we need to support a URL scheme where the volume comes from 
> the domain ([http://vol1.s3g/]...) but the bucket is used path-style 
> ([http://vol1.s3g/bucket]). It seems that both goofys and the existing s3a 
> unit tests (not certain, but it appears so) require this scheme.
>  
> This Jira was created from [~elek]'s comments on the HDDS-525 jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-10) Add kdc docker image for secure ozone cluster

2018-10-04 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-10?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-10:
---
Summary: Add kdc docker image for secure ozone cluster  (was: docker 
changes to test secure ozone cluster)

> Add kdc docker image for secure ozone cluster
> -
>
> Key: HDDS-10
> URL: https://issues.apache.org/jira/browse/HDDS-10
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-10-HDDS-4.00.patch, HDDS-10-HDDS-4.01.patch, 
> HDDS-10-HDDS-4.02.patch, HDDS-10-HDDS-4.03.patch, HDDS-10-HDDS-4.05.patch
>
>
> Update docker compose and settings to test secure ozone cluster.
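
For illustration only, a docker-compose fragment along these lines could add a 
KDC next to the existing Ozone services; the image name, realm, and environment 
variable names below are placeholders, not the ones from the attached patches:

{code:yaml}
version: "3"
services:
  kdc:
    # Placeholder image: any container running krb5kdc/kadmind would do.
    image: example/ozone-test-kdc
    hostname: kdc
    ports:
      - "88:88/udp"
    environment:
      - KERBEROS_REALM=EXAMPLE.COM
  om:
    image: apache/hadoop-runner
    hostname: om
    environment:
      # Assumed wiring: point the Ozone Manager at the test KDC.
      - KERBEROS_SERVER=kdc
{code}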



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


