[jira] [Work logged] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?focusedWorklogId=291832&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291832
 ]

ASF GitHub Bot logged work on HDDS-1895:


Author: ASF GitHub Bot
Created on: 09/Aug/19 05:57
Start Date: 09/Aug/19 05:57
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1230: HDDS-1895. 
Support Key ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1230#issuecomment-519789252
 
 
   Rebased on top of the latest trunk, as HDDS-1884 got checked in.
   Now it is ready for review.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291832)
Time Spent: 50m  (was: 40m)

> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
> URL: https://issues.apache.org/jira/browse/HDDS-1895
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> +HDDS-1541+ adds 4 new APIs for the Ozone RPC client. The OM HA implementation 
> needs to handle them.
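
For context, the four APIs in question are the key ACL operations (add, remove, 
set, get). A minimal sketch of the client-facing signatures, assuming the 
OzoneObj/OzoneAcl types from the Ozone native ACL work; the exact signatures in 
OzoneManagerProtocol may differ:

{code:java}
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.ozone.OzoneAcl;
import org.apache.hadoop.ozone.security.acl.OzoneObj;

/**
 * Illustrative sketch of the four ACL operations HDDS-1541 adds for keys.
 * Method names follow the Ozone ACL design; signatures are assumptions, not
 * the exact OzoneManagerProtocol definitions.
 */
public interface KeyAclOperations {
  /** Add a single ACL entry to the given Ozone object (here, a key). */
  boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException;

  /** Remove a single ACL entry from the given Ozone object. */
  boolean removeAcl(OzoneObj obj, OzoneAcl acl) throws IOException;

  /** Replace all ACL entries on the given Ozone object. */
  boolean setAcl(OzoneObj obj, List<OzoneAcl> acls) throws IOException;

  /** Return the ACL entries currently set on the given Ozone object. */
  List<OzoneAcl> getAcl(OzoneObj obj) throws IOException;
}
{code}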



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?focusedWorklogId=291833&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291833
 ]

ASF GitHub Bot logged work on HDDS-1895:


Author: ASF GitHub Bot
Created on: 09/Aug/19 05:57
Start Date: 09/Aug/19 05:57
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1230: HDDS-1895. 
Support Key ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1230#issuecomment-519789277
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291833)
Time Spent: 1h  (was: 50m)

> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
> URL: https://issues.apache.org/jira/browse/HDDS-1895
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> +HDDS-1541+ adds 4 new APIs for the Ozone RPC client. The OM HA implementation 
> needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1105) Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1105?focusedWorklogId=291826&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291826
 ]

ASF GitHub Bot logged work on HDDS-1105:


Author: ASF GitHub Bot
Created on: 09/Aug/19 05:43
Start Date: 09/Aug/19 05:43
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #1259: HDDS-1105 
: Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone 
Manager
URL: https://github.com/apache/hadoop/pull/1259
 
 
   Added a mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone 
Manager. Recon makes RPC calls to the OM to get delta updates starting from the 
latest sequence number of its own OM snapshot DB. After applying the changes to 
its OM DB, the updates are passed on to the set of tasks that are "listening" on 
OM DB updates. 
   
   Other than the core logic for the above, the patch:
   - Cleans up the unit test code.
   - Fixes issues in the OM DB updates sender.
   - Removes the need for PowerMock in Recon unit tests.
   - Adds Guice injection to the task framework.
   - Cleans up the contract of the Recon task interface. 
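   
   A rough sketch of the delta-update cycle described above, with hypothetical 
class and method names (OmDeltaClient, ReconOmTask and runOnce are illustrative 
placeholders rather than the exact names in this patch):
   
   ```java
   import java.io.IOException;
   import java.util.List;
   
   // Hypothetical sketch of Recon's delta-update cycle; names are illustrative.
   public class OmDeltaUpdateCycle {
   
     /** Abstraction over the RPC that returns OM DB updates after a sequence number. */
     interface OmDeltaClient {
       List<byte[]> getUpdatesSince(long sequenceNumber) throws IOException;
     }
   
     /** A Recon task that wants to be notified of OM DB updates. */
     interface ReconOmTask {
       void process(List<byte[]> omDbUpdates);
     }
   
     private final OmDeltaClient omClient;
     private final List<ReconOmTask> tasks;
     private long lastAppliedSequenceNumber;
   
     OmDeltaUpdateCycle(OmDeltaClient omClient, List<ReconOmTask> tasks) {
       this.omClient = omClient;
       this.tasks = tasks;
     }
   
     /** One iteration: fetch deltas from the OM, apply them, then notify tasks. */
     void runOnce() throws IOException {
       List<byte[]> updates = omClient.getUpdatesSince(lastAppliedSequenceNumber);
       applyToLocalOmSnapshotDb(updates);        // apply to Recon's copy of the OM DB
       tasks.forEach(t -> t.process(updates));   // fan out to the "listening" tasks
       lastAppliedSequenceNumber += updates.size();
     }
   
     private void applyToLocalOmSnapshotDb(List<byte[]> updates) {
       // The real implementation writes the batched updates into Recon's
       // RocksDB snapshot of the OM DB; omitted here.
     }
   }
   ```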
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291826)
Time Spent: 10m
Remaining Estimate: 0h

> Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone 
> Manager.
> 
>
> Key: HDDS-1105
> URL: https://issues.apache.org/jira/browse/HDDS-1105
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> *Some context*
> The FSCK server will periodically invoke this OM API, passing in the most 
> recent sequence number of its own RocksDB instance. The OM will use the 
> RocksDB getUpdatesSince() API to answer this query. Since the getUpdatesSince 
> API only works against the RocksDB WAL, we have to configure the OM RocksDB WAL 
> (https://github.com/facebook/rocksdb/wiki/Write-Ahead-Log) with a sufficient 
> max size to make this API useful. If the OM cannot get all transactions since 
> the given sequence number (due to WAL flushing), it can error out. In that 
> case the FSCK server can fall back to getting the entire checkpoint snapshot 
> implemented in HDDS-1085.
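
For reference, a minimal sketch of how the getUpdatesSince() call looks with the 
RocksDB Java API (the read-only open and the printed fields are illustrative; the 
OM wraps this differently):

{code:java}
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.TransactionLogIterator;
import org.rocksdb.TransactionLogIterator.BatchResult;

public class WalDeltaReader {
  public static void main(String[] args) throws RocksDBException {
    RocksDB.loadLibrary();
    String dbPath = args[0];
    long fromSequence = Long.parseLong(args[1]);
    try (Options options = new Options().setCreateIfMissing(false);
         RocksDB db = RocksDB.openReadOnly(options, dbPath);
         // Iterates WAL entries with sequence numbers >= fromSequence; fails
         // if the WAL no longer holds them, hence the fallback to a full
         // checkpoint described above.
         TransactionLogIterator it = db.getUpdatesSince(fromSequence)) {
      for (; it.isValid(); it.next()) {
        BatchResult batch = it.getBatch();
        System.out.println("seq=" + batch.sequenceNumber()
            + " ops=" + batch.writeBatch().count());
      }
    }
  }
}
{code}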



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1105) Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1105:
-
Labels: pull-request-available  (was: )

> Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone 
> Manager.
> 
>
> Key: HDDS-1105
> URL: https://issues.apache.org/jira/browse/HDDS-1105
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>
> *Some context*
> The FSCK server will periodically invoke this OM API, passing in the most 
> recent sequence number of its own RocksDB instance. The OM will use the 
> RocksDB getUpdatesSince() API to answer this query. Since the getUpdatesSince 
> API only works against the RocksDB WAL, we have to configure the OM RocksDB WAL 
> (https://github.com/facebook/rocksdb/wiki/Write-Ahead-Log) with a sufficient 
> max size to make this API useful. If the OM cannot get all transactions since 
> the given sequence number (due to WAL flushing), it can error out. In that 
> case the FSCK server can fall back to getting the entire checkpoint snapshot 
> implemented in HDDS-1085.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12125) Document the missing EC removePolicy command

2019-08-08 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-12125:
--
Summary: Document the missing EC removePolicy command  (was: Document the 
missing -removePolicy command of ec)

> Document the missing EC removePolicy command
> 
>
> Key: HDFS-12125
> URL: https://issues.apache.org/jira/browse/HDFS-12125
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0-alpha4
>Reporter: Wenxin He
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-12125.001.patch
>
>
> Document the missing command -removePolicy in HDFSErasureCoding.md and 
> HDFSCommands.md and regroup the ec commands to improve the user experience.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12125) Document the missing EC removePolicy command

2019-08-08 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903578#comment-16903578
 ] 

Siyao Meng commented on HDFS-12125:
---

It's a bit confusing to have both a patch and a PR without clarification; I 
figured they should be the same.
Anyway, we are limiting the change in this jira to only "Document the missing 
removePolicy command", so the change is minimal.
New PR posted: https://github.com/apache/hadoop/pull/1258

> Document the missing EC removePolicy command
> 
>
> Key: HDFS-12125
> URL: https://issues.apache.org/jira/browse/HDFS-12125
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0-alpha4
>Reporter: Wenxin He
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-12125.001.patch
>
>
> Document the missing command -removePolicy in HDFSErasureCoding.md and 
> HDFSCommands.md.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12125) Document the missing EC removePolicy command

2019-08-08 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-12125:
--
Description: Document the missing command -removePolicy in 
HDFSErasureCoding.md and HDFSCommands.md.  (was: Document the missing command 
-removePolicy in HDFSErasureCoding.md and HDFSCommands.md and regroup the ec 
commands to improve the user experience.)

> Document the missing EC removePolicy command
> 
>
> Key: HDFS-12125
> URL: https://issues.apache.org/jira/browse/HDFS-12125
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0-alpha4
>Reporter: Wenxin He
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-12125.001.patch
>
>
> Document the missing command -removePolicy in HDFSErasureCoding.md and 
> HDFSCommands.md.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1934) TestSecureOzoneCluster may fail due to port conflict

2019-08-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903569#comment-16903569
 ] 

Hudson commented on HDDS-1934:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17069 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17069/])
HDDS-1934. TestSecureOzoneCluster may fail due to port conflict (#1254) 
(bharat: rev 88ed1e0bfd6652d1803ebae0b3e743316cc8d11e)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestSecureOzoneCluster.java


> TestSecureOzoneCluster may fail due to port conflict
> 
>
> Key: HDDS-1934
> URL: https://issues.apache.org/jira/browse/HDDS-1934
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {{TestSecureOzoneCluster}} fails if SCM is already running on same host.
> Steps to reproduce:
> # Start {{ozone}} docker compose cluster
> # Run {{TestSecureOzoneCluster}} test
> {noformat:title=https://ci.anzix.net/job/ozone/17602/consoleText}
> [ERROR] Tests run: 10, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 
> 49.821 s <<< FAILURE! - in org.apache.hadoop.ozone.TestSecureOzoneCluster
> [ERROR] 
> testSCMSecurityProtocol(org.apache.hadoop.ozone.TestSecureOzoneCluster)  Time 
> elapsed: 6.59 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
>   at 
> org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at 
> org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSCMSecurityProtocol(TestSecureOzoneCluster.java:277)
> ...
> [ERROR] testSecureOmReInit(org.apache.hadoop.ozone.TestSecureOzoneCluster)  
> Time elapsed: 5.312 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
>   at 
> org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at 
> org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSecureOmReInit(TestSecureOzoneCluster.java:743)
> ...
> [ERROR] 
> testSecureOmInitSuccess(org.apache.hadoop.ozone.TestSecureOzoneCluster)  Time 
> elapsed: 5.312 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
>   at 
> org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at 
> org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSecureOmInitSuccess(TestSecureOzoneCluster.java:789)
> ...
> {noformat}
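
One common way to make such tests independent of whatever is already listening 
on the host is to bind to an ephemeral port first and feed the chosen port into 
the test's HTTP address configuration instead of the fixed default 9876. A 
minimal sketch of that approach (the actual fix in #1254 may differ):

{code:java}
import java.io.IOException;
import java.net.ServerSocket;

public final class FreePortFinder {
  private FreePortFinder() { }

  /**
   * Ask the OS for a currently free TCP port by binding to port 0.
   * The returned port can be set as the SCM HTTP address in the test
   * configuration so the test does not collide with a running cluster.
   */
  public static int findFreePort() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {
      socket.setReuseAddress(true);
      return socket.getLocalPort();
    }
  }
}
{code}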



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1934) TestSecureOzoneCluster may fail due to port conflict

2019-08-08 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1934:
-
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

> TestSecureOzoneCluster may fail due to port conflict
> 
>
> Key: HDDS-1934
> URL: https://issues.apache.org/jira/browse/HDDS-1934
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {{TestSecureOzoneCluster}} fails if SCM is already running on same host.
> Steps to reproduce:
> # Start {{ozone}} docker compose cluster
> # Run {{TestSecureOzoneCluster}} test
> {noformat:title=https://ci.anzix.net/job/ozone/17602/consoleText}
> [ERROR] Tests run: 10, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 
> 49.821 s <<< FAILURE! - in org.apache.hadoop.ozone.TestSecureOzoneCluster
> [ERROR] 
> testSCMSecurityProtocol(org.apache.hadoop.ozone.TestSecureOzoneCluster)  Time 
> elapsed: 6.59 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
>   at 
> org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at 
> org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSCMSecurityProtocol(TestSecureOzoneCluster.java:277)
> ...
> [ERROR] testSecureOmReInit(org.apache.hadoop.ozone.TestSecureOzoneCluster)  
> Time elapsed: 5.312 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
>   at 
> org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at 
> org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSecureOmReInit(TestSecureOzoneCluster.java:743)
> ...
> [ERROR] 
> testSecureOmInitSuccess(org.apache.hadoop.ozone.TestSecureOzoneCluster)  Time 
> elapsed: 5.312 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
>   at 
> org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at 
> org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSecureOmInitSuccess(TestSecureOzoneCluster.java:789)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1934) TestSecureOzoneCluster may fail due to port conflict

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1934?focusedWorklogId=291801&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291801
 ]

ASF GitHub Bot logged work on HDDS-1934:


Author: ASF GitHub Bot
Created on: 09/Aug/19 04:38
Start Date: 09/Aug/19 04:38
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1254: 
HDDS-1934. TestSecureOzoneCluster may fail due to port conflict
URL: https://github.com/apache/hadoop/pull/1254
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291801)
Time Spent: 40m  (was: 0.5h)

> TestSecureOzoneCluster may fail due to port conflict
> 
>
> Key: HDDS-1934
> URL: https://issues.apache.org/jira/browse/HDDS-1934
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {{TestSecureOzoneCluster}} fails if SCM is already running on same host.
> Steps to reproduce:
> # Start {{ozone}} docker compose cluster
> # Run {{TestSecureOzoneCluster}} test
> {noformat:title=https://ci.anzix.net/job/ozone/17602/consoleText}
> [ERROR] Tests run: 10, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 
> 49.821 s <<< FAILURE! - in org.apache.hadoop.ozone.TestSecureOzoneCluster
> [ERROR] 
> testSCMSecurityProtocol(org.apache.hadoop.ozone.TestSecureOzoneCluster)  Time 
> elapsed: 6.59 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
>   at 
> org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at 
> org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSCMSecurityProtocol(TestSecureOzoneCluster.java:277)
> ...
> [ERROR] testSecureOmReInit(org.apache.hadoop.ozone.TestSecureOzoneCluster)  
> Time elapsed: 5.312 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
>   at 
> org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at 
> org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSecureOmReInit(TestSecureOzoneCluster.java:743)
> ...
> [ERROR] 
> testSecureOmInitSuccess(org.apache.hadoop.ozone.TestSecureOzoneCluster)  Time 
> elapsed: 5.312 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
>   at 
> org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at 
> org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSecureOmInitSuccess(TestSecureOzoneCluster.java:789)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1934) TestSecureOzoneCluster may fail due to port conflict

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1934?focusedWorklogId=291802&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291802
 ]

ASF GitHub Bot logged work on HDDS-1934:


Author: ASF GitHub Bot
Created on: 09/Aug/19 04:38
Start Date: 09/Aug/19 04:38
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1254: HDDS-1934. 
TestSecureOzoneCluster may fail due to port conflict
URL: https://github.com/apache/hadoop/pull/1254#issuecomment-519774468
 
 
   Thank You @adoroszlai for the contribution.
   I have committed this to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291802)
Time Spent: 50m  (was: 40m)

> TestSecureOzoneCluster may fail due to port conflict
> 
>
> Key: HDDS-1934
> URL: https://issues.apache.org/jira/browse/HDDS-1934
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {{TestSecureOzoneCluster}} fails if SCM is already running on same host.
> Steps to reproduce:
> # Start {{ozone}} docker compose cluster
> # Run {{TestSecureOzoneCluster}} test
> {noformat:title=https://ci.anzix.net/job/ozone/17602/consoleText}
> [ERROR] Tests run: 10, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 
> 49.821 s <<< FAILURE! - in org.apache.hadoop.ozone.TestSecureOzoneCluster
> [ERROR] 
> testSCMSecurityProtocol(org.apache.hadoop.ozone.TestSecureOzoneCluster)  Time 
> elapsed: 6.59 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
>   at 
> org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at 
> org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSCMSecurityProtocol(TestSecureOzoneCluster.java:277)
> ...
> [ERROR] testSecureOmReInit(org.apache.hadoop.ozone.TestSecureOzoneCluster)  
> Time elapsed: 5.312 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
>   at 
> org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at 
> org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSecureOmReInit(TestSecureOzoneCluster.java:743)
> ...
> [ERROR] 
> testSecureOmInitSuccess(org.apache.hadoop.ozone.TestSecureOzoneCluster)  Time 
> elapsed: 5.312 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
>   at 
> org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at 
> org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSecureOmInitSuccess(TestSecureOzoneCluster.java:789)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903558#comment-16903558
 ] 

Hudson commented on HDDS-1884:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17068 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17068/])
HDDS-1884. Support Bucket ACL operations for OM HA. (#1202) (github: rev 
91f41b7d885d7b0f3abf132a5c8e8812fb179330)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerRatisUtils.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketSetAclRequest.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketRemoveAclRequest.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/util/package-info.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketInfo.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/package-info.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketAclRequest.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/util/ObjectParser.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/util/BooleanBiFunction.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketAddAclRequest.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/acl/package-info.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/acl/OMBucketAclResponse.java


> Support Bucket ACL operations for OM HA.
> 
>
> Key: HDDS-1884
> URL: https://issues.apache.org/jira/browse/HDDS-1884
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> HDDS-1540 adds 4 new APIs for the Ozone RPC client. The OM HA implementation 
> needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-08 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1884:
-
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

> Support Bucket ACL operations for OM HA.
> 
>
> Key: HDDS-1884
> URL: https://issues.apache.org/jira/browse/HDDS-1884
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> HDDS-1540 adds 4 new APIs for the Ozone RPC client. The OM HA implementation 
> needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=291796&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291796
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 09/Aug/19 04:29
Start Date: 09/Aug/19 04:29
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1202: 
HDDS-1884. Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291796)
Time Spent: 9h 40m  (was: 9.5h)

> Support Bucket ACL operations for OM HA.
> 
>
> Key: HDDS-1884
> URL: https://issues.apache.org/jira/browse/HDDS-1884
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> HDDS-1540 adds 4 new APIs for the Ozone RPC client. The OM HA implementation 
> needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=291795&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291795
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 09/Aug/19 04:28
Start Date: 09/Aug/19 04:28
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1202: HDDS-1884. 
Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-519772720
 
 
   Test failures are not related to this PR.
   Thank You @xiaoyuyao for the review.
   I will commit this to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291795)
Time Spent: 9.5h  (was: 9h 20m)

> Support Bucket ACL operations for OM HA.
> 
>
> Key: HDDS-1884
> URL: https://issues.apache.org/jira/browse/HDDS-1884
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>
> HDDS-1540 adds 4 new APIs for the Ozone RPC client. The OM HA implementation 
> needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14696) Backport HDFS-11273 to branch-2 (Move TransferFsImage#doGetUrl function to a Util class)

2019-08-08 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903553#comment-16903553
 ] 

Siyao Meng commented on HDFS-14696:
---

Thanks [~jojochuang] for reviewing/committing!

> Backport HDFS-11273 to branch-2 (Move TransferFsImage#doGetUrl function to a 
> Util class)
> 
>
> Key: HDFS-14696
> URL: https://issues.apache.org/jira/browse/HDFS-14696
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 2.10.0
>
> Attachments: HDFS-14696-branch-2.003.patch
>
>
> Backporting HDFS-11273 Move TransferFsImage#doGetUrl function to a Util class 
> to branch-2.
> To avoid confusion with branch-2 patches in HDFS-11273, patch revision number 
> will continue from 003.
> *HDFS-14696-branch-2.003.patch* is the same as 
> *HDFS-11273-branch-2.003.patch*.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1554) Create disk tests for fault injection test

2019-08-08 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903443#comment-16903443
 ] 

Arpit Agarwal edited comment on HDDS-1554 at 8/9/19 4:18 AM:
-

A few comments on the test case implementations.
 # {{ITDiskReadOnly#testReadOnlyDiskStartup}} - The following block of code can 
probably be removed, since it's really testing that the cluster is read-only in 
safe mode. We have unit tests for that:
{code:java}
try {
  createVolumeAndBucket();
} catch (Exception e) {
  LOG.info("Bucket creation failed for read-only disk: ", e);
  Assert.assertTrue("Cluster is still in safe mode.", safeMode);
}
{code}

 # {{ITDiskReadOnly#testUpload}} - do we need to wait for safe mode exit after 
restarting the cluster? Also I think this test is essentially the same as the 
previous one.
 # {{ITDiskCorruption#addCorruption:72}} - looks like we have a hard-coded 
path. Should we get from configuration instead?
 # {{ITDiskCorruption#testUpload}} - The corruption implementation is a bit of a 
heavy hammer: it replaces the content of all meta files. Is it possible to 
make it reflect real-world corruption, where only part of a file is 
corrupted? Also, we should probably restart the cluster after corrupting RocksDB 
meta files.
 # {{ITDiskCorruption#testDownload:161}} - should we just remove the assertTrue 
since it is no-op?
{code:java}
  Assert.assertTrue("Download File test passed.", true);
{code}

 


was (Author: arpitagarwal):
A few comments on the test case implementations.
# {{ITDiskReadOnly#testReadOnlyDiskStartup}} - The following block of code can 
probably be removed, since it's really testing that the cluster is read-only in 
safe mode. We have unit tests for that:
{code}
try {
  createVolumeAndBucket();
} catch (Exception e) {
  LOG.info("Bucket creation failed for read-only disk: ", e);
  Assert.assertTrue("Cluster is still in safe mode.", safeMode);
}
{code}
# {{ITDiskReadOnly#testUpload}} - do we need to wait for safe mode exit after 
restarting the cluster? Also I think this test is essentially the same as the 
previous one.
# {{ITDiskCorruption#addCorruption:72}} - looks like we have a hard-coded path. 
Should we get from configuration instead?
# {{ITDiskCorruption#testUpload}} - The corruption implementation is bit of a 
heavy hammer, it is replacing the content of all meta files. Is it possible to 
make it reflect real-world corruption where a part of the file may be 
corrupted. Also we should probably restart the cluster after corrupting RocksDB 
meta files.
# {{ITDiskCorruption#testDownload:161}} - should we just remove the assertTrue 
since it is no-op?
{code}
  Assert.assertTrue("Download File test passed.", true);
{code}

Still reviewing the rest.

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch, HDDS-1554.007.patch, HDDS-1554.008.patch, 
> HDDS-1554.009.patch, HDDS-1554.010.patch, HDDS-1554.011.patch, 
> HDDS-1554.012.patch, HDDS-1554.013.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The current plan for fault injection disk tests are:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1554) Create disk tests for fault injection test

2019-08-08 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903443#comment-16903443
 ] 

Arpit Agarwal edited comment on HDDS-1554 at 8/9/19 4:17 AM:
-

A few comments on the test case implementations.
# {{ITDiskReadOnly#testReadOnlyDiskStartup}} - The following block of code can 
probably be removed, since it's really testing that the cluster is read-only in 
safe mode. We have unit tests for that:
{code}
try {
  createVolumeAndBucket();
} catch (Exception e) {
  LOG.info("Bucket creation failed for read-only disk: ", e);
  Assert.assertTrue("Cluster is still in safe mode.", safeMode);
}
{code}
# {{ITDiskReadOnly#testUpload}} - do we need to wait for safe mode exit after 
restarting the cluster? Also I think this test is essentially the same as the 
previous one.
# {{ITDiskCorruption#addCorruption:72}} - looks like we have a hard-coded path. 
Should we get from configuration instead?
# {{ITDiskCorruption#testUpload}} - The corruption implementation is bit of a 
heavy hammer, it is replacing the content of all meta files. Is it possible to 
make it reflect real-world corruption where a part of the file may be 
corrupted. Also we should probably restart the cluster after corrupting RocksDB 
meta files.
# {{ITDiskCorruption#testDownload:161}} - should we just remove the assertTrue 
since it is no-op?
{code}
  Assert.assertTrue("Download File test passed.", true);
{code}

Still reviewing the rest.


was (Author: arpitagarwal):
Looking at the test case implementations:
# {{ITDiskReadOnly#testReadOnlyDiskStartup}} - The following block of code from 
can be removed, since it's really testing that the cluster is read-only in safe 
mode. We have unit tests for that:
{code}
try {
  createVolumeAndBucket();
} catch (Exception e) {
  LOG.info("Bucket creation failed for read-only disk: ", e);
  Assert.assertTrue("Cluster is still in safe mode.", safeMode);
}
{code}
# {{ITDiskReadOnly#testUpload}} - do we need to wait for safe mode exit after 
restarting the cluster? Also I think this test is essentially the same as the 
previous one. Once we have ensured that read-only disk forces us to remain in 
safe mode, the rest of the checks should be covered by safe-mode unit tests.

Still reviewing the rest.

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch, HDDS-1554.007.patch, HDDS-1554.008.patch, 
> HDDS-1554.009.patch, HDDS-1554.010.patch, HDDS-1554.011.patch, 
> HDDS-1554.012.patch, HDDS-1554.013.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The current plan for fault injection disk tests are:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1938) Change omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter

2019-08-08 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-1938:
-
Description: 
The diff will be based on HDDS-1891.

Goal:
1. Change omPort type to int because it is eventually used as int anyway
2. Refactor the parser code in BasicOzoneFileSystem#initialize

Will post a PR after HDDS-1891 is merged.

  was:
The diff will be based on HDDS-1891.

Goal:
1. Change omPort type to int because it is eventually used as int anyway
2. Refactor the parser code in BasicOzoneFileSystem#initialize


> Change omPort parameter type from String to int in 
> BasicOzoneFileSystem#createAdapter
> -
>
> Key: HDDS-1938
> URL: https://issues.apache.org/jira/browse/HDDS-1938
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Filesystem
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDDS-1938.001.patch
>
>
> The diff will be based on HDDS-1891.
> Goal:
> 1. Change omPort type to int because it is eventually used as int anyway
> 2. Refactor the parser code in BasicOzoneFileSystem#initialize
> Will post a PR after HDDS-1891 is merged.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1938) Change omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter

2019-08-08 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-1938:
-
Description: 
The diff will be based on HDDS-1891.

Goal:
1. Change omPort type to int because it is eventually used as int anyway
2. Refactor the parser code in BasicOzoneFileSystem#initialize

  was:
The diff will be based on HDDS-1891.

Goal:
1. Change omPort type to int because it is eventually used as int anyway
2. Refactor the parse code


> Change omPort parameter type from String to int in 
> BasicOzoneFileSystem#createAdapter
> -
>
> Key: HDDS-1938
> URL: https://issues.apache.org/jira/browse/HDDS-1938
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Filesystem
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDDS-1938.001.patch
>
>
> The diff will be based on HDDS-1891.
> Goal:
> 1. Change omPort type to int because it is eventually used as int anyway
> 2. Refactor the parser code in BasicOzoneFileSystem#initialize



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1938) Change omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter

2019-08-08 Thread Siyao Meng (JIRA)
Siyao Meng created HDDS-1938:


 Summary: Change omPort parameter type from String to int in 
BasicOzoneFileSystem#createAdapter
 Key: HDDS-1938
 URL: https://issues.apache.org/jira/browse/HDDS-1938
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Filesystem
Reporter: Siyao Meng
Assignee: Siyao Meng
 Attachments: HDDS-1938.001.patch

The diff will be based on HDDS-1891.

Goal:
1. Change omPort type to int because it is eventually used as int anyway
2. Refactor the parse code



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1938) Change omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter

2019-08-08 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-1938:
-
Attachment: HDDS-1938.001.patch

> Change omPort parameter type from String to int in 
> BasicOzoneFileSystem#createAdapter
> -
>
> Key: HDDS-1938
> URL: https://issues.apache.org/jira/browse/HDDS-1938
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Filesystem
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDDS-1938.001.patch
>
>
> The diff will be based on HDDS-1891.
> Goal:
> 1. Change omPort type to int because it is eventually used as int anyway
> 2. Refactor the parse code



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1891) Ozone fs shell command should work with default port when port number is not specified

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1891?focusedWorklogId=291735&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291735
 ]

ASF GitHub Bot logged work on HDDS-1891:


Author: ASF GitHub Bot
Created on: 09/Aug/19 03:06
Start Date: 09/Aug/19 03:06
Worklog Time Spent: 10m 
  Work Description: smengcl commented on issue #1218: HDDS-1891. Ozone fs 
shell command should work with default port when port number is not specified
URL: https://github.com/apache/hadoop/pull/1218#issuecomment-519760099
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291735)
Time Spent: 2.5h  (was: 2h 20m)

> Ozone fs shell command should work with default port when port number is not 
> specified
> --
>
> Key: HDDS-1891
> URL: https://issues.apache.org/jira/browse/HDDS-1891
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> {code:bash|title=Without port number -> Error}
> $ ozone fs -ls o3fs://bucket.volume.localhost/
> -ls: Ozone file system url should be either one of the two forms: 
> o3fs://bucket.volume/key  OR o3fs://bucket.volume.om-host.example.com:5678/key
> ...
> {code}
> {code:bash|title=With port number -> Success}
> $ ozone fs -ls o3fs://bucket.volume.localhost:9862/
> Found 1 items
> -rw-rw-rw-   1 hadoop hadoop   1485 2019-08-01 21:14 
> o3fs://bucket.volume.localhost:9862/README.txt
> {code}
> We expect the first command to attempt port 9862 by default.
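
A minimal sketch of the intended behaviour, parsing the o3fs authority and 
falling back to the default OM RPC port (9862) when none is given; the helper 
name and the exact parsing in the patch are assumptions:

{code:java}
import java.net.URI;

public final class O3fsAuthorityParser {
  private O3fsAuthorityParser() { }

  /** Default OM RPC port assumed here; the real code would read it from config. */
  private static final int DEFAULT_OM_PORT = 9862;

  /** Returns "host:port" for the OM, defaulting the port when it is absent. */
  public static String omAddress(URI fsUri) {
    // The o3fs authority has the form bucket.volume[.om-host[:port]].
    String[] parts = fsUri.getAuthority().split("\\.", 3);
    if (parts.length < 3) {
      throw new IllegalArgumentException("No OM host in o3fs URI: " + fsUri);
    }
    String hostAndMaybePort = parts[2];
    return hostAndMaybePort.contains(":")
        ? hostAndMaybePort
        : hostAndMaybePort + ":" + DEFAULT_OM_PORT;
  }

  public static void main(String[] args) {
    // Both forms should resolve to localhost:9862.
    System.out.println(omAddress(URI.create("o3fs://bucket.volume.localhost/")));
    System.out.println(omAddress(URI.create("o3fs://bucket.volume.localhost:9862/")));
  }
}
{code}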



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1891) Ozone fs shell command should work with default port when port number is not specified

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1891?focusedWorklogId=291734&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291734
 ]

ASF GitHub Bot logged work on HDDS-1891:


Author: ASF GitHub Bot
Created on: 09/Aug/19 03:06
Start Date: 09/Aug/19 03:06
Worklog Time Spent: 10m 
  Work Description: smengcl commented on issue #1218: HDDS-1891. Ozone fs 
shell command should work with default port when port number is not specified
URL: https://github.com/apache/hadoop/pull/1218#issuecomment-519760075
 
 
   Fixed the code and the unit test. The following fs commands all work as I 
tested in docker:
   ```bash
   ozone fs -put README.txt o3fs:///
   ozone fs -ls /
   ozone fs -ls o3fs://bucket.volume.om/
   ozone fs -ls o3fs://bucket.volume.om:9862/
   ozone fs -ls o3fs://bucket.volume/
   ozone fs -get /README.txt R.txt
   ozone fs -get o3fs://bucket.volume.om/README.txt R2.txt
   ```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291734)
Time Spent: 2h 20m  (was: 2h 10m)

> Ozone fs shell command should work with default port when port number is not 
> specified
> --
>
> Key: HDDS-1891
> URL: https://issues.apache.org/jira/browse/HDDS-1891
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> {code:bash|title=Without port number -> Error}
> $ ozone fs -ls o3fs://bucket.volume.localhost/
> -ls: Ozone file system url should be either one of the two forms: 
> o3fs://bucket.volume/key  OR o3fs://bucket.volume.om-host.example.com:5678/key
> ...
> {code}
> {code:bash|title=With port number -> Success}
> $ ozone fs -ls o3fs://bucket.volume.localhost:9862/
> Found 1 items
> -rw-rw-rw-   1 hadoop hadoop   1485 2019-08-01 21:14 
> o3fs://bucket.volume.localhost:9862/README.txt
> {code}
> We expect the first command to attempt port 9862 by default.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14710) RBF:Improve some RPC performances

2019-08-08 Thread xuzq (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuzq updated HDFS-14710:

Attachment: HDFS-14710-trunk-001.patch

> RBF:Improve some RPC performances
> -
>
> Key: HDFS-14710
> URL: https://issues.apache.org/jira/browse/HDFS-14710
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: xuzq
>Priority: Minor
> Attachments: HDFS-14710-trunk-001.patch
>
>
> We can improve the performance of some RPCs, such as addBlock, 
> getAdditionalDatanode and complete, when the extendedBlock is not null.
> Since HDFS encourages users to write large files, the extendedBlock is not 
> null in most cases.
> In the scenario of multiple destinations and large files, the effect is more 
> obvious.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14710) RBF:Improve some RPC performances

2019-08-08 Thread xuzq (JIRA)
xuzq created HDFS-14710:
---

 Summary: RBF:Improve some RPC performances
 Key: HDFS-14710
 URL: https://issues.apache.org/jira/browse/HDFS-14710
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: rbf
Reporter: xuzq


We can improve the performance of some RPCs, such as addBlock, 
getAdditionalDatanode and complete, when the extendedBlock is not null.

Since HDFS encourages users to write large files, the extendedBlock is not 
null in most cases.

In the scenario of multiple destinations and large files, the effect is more 
obvious.
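
The idea is that when the client already supplies an ExtendedBlock, the router 
can resolve the owning namespace from its block pool ID instead of fanning the 
call out to every destination. A rough sketch with hypothetical router-side 
helpers (getBlockPoolId() is the real ExtendedBlock accessor; RouterRpcTarget 
and the lookup methods are placeholders for the RBF internals):

{code:java}
import java.io.IOException;

import org.apache.hadoop.hdfs.protocol.ExtendedBlock;

public class BlockPoolRouting {

  /** Hypothetical resolver over the router's mount table and membership state. */
  interface RouterRpcTarget {
    String nameserviceForBlockPool(String blockPoolId) throws IOException;
    String anyNameserviceFor(String srcPath) throws IOException;
  }

  private final RouterRpcTarget resolver;

  BlockPoolRouting(RouterRpcTarget resolver) {
    this.resolver = resolver;
  }

  /**
   * When previous (the last allocated block) is non-null, its block pool ID
   * identifies the namespace that owns the file, so addBlock,
   * getAdditionalDatanode and complete can go straight to that namespace.
   */
  String pickNameservice(String srcPath, ExtendedBlock previous) throws IOException {
    if (previous != null) {
      return resolver.nameserviceForBlockPool(previous.getBlockPoolId());
    }
    // No block yet (e.g. the first addBlock of a new file): fall back to the
    // existing mount-table based resolution across all destinations.
    return resolver.anyNameserviceFor(srcPath);
  }
}
{code}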



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14099) Unknown frame descriptor when decompressing multiple frames in ZStandardDecompressor

2019-08-08 Thread xuzq (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903502#comment-16903502
 ] 

xuzq commented on HDFS-14099:
-

[~aajisaka] [~jlowe] [~churromorales] Could you have a look? Thanks.

> Unknown frame descriptor when decompressing multiple frames in 
> ZStandardDecompressor
> 
>
> Key: HDFS-14099
> URL: https://issues.apache.org/jira/browse/HDFS-14099
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: Hadoop Version: hadoop-3.0.3
> Java Version: 1.8.0_144
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14099-trunk-001.patch
>
>
> We need to use the ZSTD compression algorithm in Hadoop, so I wrote a simple 
> demo like this for testing.
> {code:java}
> // code placeholder
> while ((size = fsDataInputStream.read(bufferV2)) > 0 ) {
>   countSize += size;
>   if (countSize == 65536 * 8) {
> if(!isFinished) {
>   // finish a frame in zstd
>   cmpOut.finish();
>   isFinished = true;
> }
> fsDataOutputStream.flush();
> fsDataOutputStream.hflush();
>   }
>   if(isFinished) {
> LOG.info("Will resetState. N=" + n);
> // reset the stream and write again
> cmpOut.resetState();
> isFinished = false;
>   }
>   cmpOut.write(bufferV2, 0, size);
>   bufferV2 = new byte[5 * 1024 * 1024];
>   n++;
> }
> {code}
>  
> I then used "*hadoop fs -text*" to read this file, and it failed with the error 
> below.
> {code:java}
> Exception in thread "main" java.lang.InternalError: Unknown frame descriptor
> at 
> org.apache.hadoop.io.compress.zstd.ZStandardDecompressor.inflateBytesDirect(Native
>  Method)
> at 
> org.apache.hadoop.io.compress.zstd.ZStandardDecompressor.decompress(ZStandardDecompressor.java:181)
> at 
> org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:111)
> at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:105)
> at java.io.InputStream.read(InputStream.java:101)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:98)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:66)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:127)
> at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:101)
> at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:96)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:303)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:285)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:269)
> at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
> {code}
>  
> So I looked into the code, including the JNI part, and found this bug.
> The *ZSTD_initDStream(stream)* method may be called twice within the same *Frame*.
> The first call is in *ZStandardDecompressor.c*:
> {code:java}
> if (size == 0) {
> (*env)->SetBooleanField(env, this, ZStandardDecompressor_finished, 
> JNI_TRUE);
> size_t result = dlsym_ZSTD_initDStream(stream);
> if (dlsym_ZSTD_isError(result)) {
> THROW(env, "java/lang/InternalError", 
> dlsym_ZSTD_getErrorName(result));
> return (jint) 0;
> }
> }
> {code}
> This call is correct, but *Finished* is never set back to false, even if there 
> is still data (a new frame) in *CompressedBuffer* or *UserBuffer* that needs 
> to be decompressed.
> The second call comes from *org.apache.hadoop.io.compress.DecompressorStream* 
> via *decompressor.reset()*, because *Finished* is always true after a *Frame* 
> has been decompressed.
> {code:java}
> if (decompressor.finished()) {
>   // First see if there was any leftover buffered input from previous
>   // stream; if not, attempt to refill buffer.  If refill -> EOF, we're
>   // all done; else reset, fix up input buffer, and get ready for next
>   // concatenated substream/"member".
>   int nRemaining = decompressor.getRemaining();
>   if (nRemaining == 0) {
> int m = getCompressedData();
> if (m == -1) {
>   // apparently the previous end-of-stream was also end-of-file:
>   // return success, as if we had never called getCompressedData()
>   eof = true;
>   return -1;
> }
> decompressor.reset();
> decompressor.setInput(buffer, 0, m);
> lastBytesSent = m;
>   } else {
> // looks like it's a concatenated stream:  reset low-level zlib (or
> // other engine) and buffers, then "resend" remaining input data
> 

[jira] [Commented] (HDFS-14674) [SBN read] Got an unexpected txid when tail editlog

2019-08-08 Thread wangzhaohui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903495#comment-16903495
 ] 

wangzhaohui commented on HDFS-14674:


Hi [~csun], [~shv], patch v006 adds a unit test and fixes checkstyle; please take a 
look, thanks!

> [SBN read] Got an unexpected txid when tail editlog
> ---
>
> Key: HDFS-14674
> URL: https://issues.apache.org/jira/browse/HDFS-14674
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Blocker
> Attachments: HDFS-14674-001.patch, HDFS-14674-003.patch, 
> HDFS-14674-004.patch, HDFS-14674-005.patch, HDFS-14674-006.patch, image.png
>
>
> Add the following configuration
> !image-2019-07-26-11-34-23-405.png!
> error:
> {code:java}
> //
> [2019-07-17T11:50:21.048+08:00] [INFO] [Edit log tailer] : replaying edit 
> log: 1/20512836 transactions completed. (0%) [2019-07-17T11:50:21.059+08:00] 
> [INFO] [Edit log tailer] : Edits file 
> http://ip/getJournal?jid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH
>  of size 3126782311 edits # 500 loaded in 3 seconds 
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Reading 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@51ceb7bc 
> expecting start txid #232056752162 [2019-07-17T11:50:21.059+08:00] [INFO] 
> [Edit log tailer] : Start loading edits file 
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH
>  maxTxnipsToRead = 500 [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log 
> tailer] : Fast-forwarding stream 
> 'http://ip/getJournal?jid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH'
>  to transaction ID 232056751662 [2019-07-17T11:50:21.059+08:00] [INFO] [Edit 
> log tailer] ip: Fast-forwarding stream 
> 'http://ip/getJournal?jid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH'
>  to transaction ID 232056751662 [2019-07-17T11:50:21.061+08:00] [ERROR] [Edit 
> log tailer] : Unknown error encountered while tailing edits. Shutting down 
> standby NN. java.io.IOException: There appears to be a gap in the edit log. 
> We expected txid 232056752162, but got txid 232077264498. at 
> org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:239)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:895) at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>  at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
>  [2019-07-17T11:50:21.064+08:00] [INFO] [Edit log tailer] : Exiting with 
> status 1 [2019-07-17T11:50:21.066+08:00] [INFO] [Thread-1] : SHUTDOWN_MSG: 
> / SHUTDOWN_MSG: 
> Shutting down NameNode at ip 
> /
> {code}
>  
> If dfs.ha.tail-edits.max-txns-per-lock is 500, the NameNode loads only 500 
> transactions from the current edit log segment and then moves on to the next 
> one, even though the segment contains more than 500 transactions. The next 
> segment then starts well past the expected txid (here 232056751662 + 500 = 
> 232056752162 expected, but 232077264498 found), so the NameNode gets an 
> unexpected txid when tailing the edit log.
>  
>  
> {code:java}
> //
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Edits file 
> http://ip/getJournal?jid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> 

[jira] [Commented] (HDFS-14204) Backport HDFS-12943 to branch-2

2019-08-08 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903491#comment-16903491
 ] 

Konstantin Shvachko commented on HDFS-14204:


+1 last patch looks good.
TestSafeMode fails due to HDFS-12914, and TestDirectoryScanner due to 
HDFS-14303. The rest are passing.


> Backport HDFS-12943 to branch-2
> ---
>
> Key: HDFS-14204
> URL: https://issues.apache.org/jira/browse/HDFS-14204
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14204-branch-2.001.patch, 
> HDFS-14204-branch-2.002.patch, HDFS-14204-branch-2.003.patch, 
> HDFS-14204-branch-2.004.patch, HDFS-14204-branch-2.005.patch, 
> HDFS-14204-branch-2.006.patch, HDFS-14204-branch-2.007.patch
>
>
> Currently, consistent read from standby feature (HDFS-12943) is only in trunk 
> (branch-3). This JIRA aims to backport the feature to branch-2.  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=291698=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291698
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 09/Aug/19 01:48
Start Date: 09/Aug/19 01:48
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1202: HDDS-1884. 
Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-519746903
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for branch |
   | +1 | mvninstall | 581 | trunk passed |
   | +1 | compile | 367 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 879 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 188 | trunk passed |
   | 0 | spotbugs | 422 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 624 | trunk passed |
   | -0 | patch | 464 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for patch |
   | +1 | mvninstall | 545 | the patch passed |
   | +1 | compile | 377 | the patch passed |
   | +1 | cc | 377 | the patch passed |
   | +1 | javac | 377 | the patch passed |
   | +1 | checkstyle | 74 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 633 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | the patch passed |
   | +1 | findbugs | 612 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 293 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2016 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 7690 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestKeyManagerImpl |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/20/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1202 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 91af5857cbb7 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / aa5f445 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/20/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/20/testReport/ |
   | Max. process+thread count | 4953 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/20/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291698)
Time Spent: 9h 20m  (was: 9h 10m)

> Support Bucket ACL operations for OM HA.
> 
>
> Key: HDDS-1884
> URL: https://issues.apache.org/jira/browse/HDDS-1884
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>

[jira] [Commented] (HDFS-14655) SBN : Namenode crashes if one of The JN is down

2019-08-08 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903466#comment-16903466
 ] 

Konstantin Shvachko commented on HDFS-14655:


Hey guys, discussed this with Chen. It seems that we need a pool of only 3 
threads, which can be reused for each iteration of tailing. Here 3 = number of 
Journal Nodes. Creating threads with such high frequency seems to be expensive 
in all aspects. How hard would it be to make this change?
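
For illustration only (plain java.util.concurrent, not the actual
IPCLoggerChannel/QuorumJournalManager code): one long-lived pool sized to the number of
JournalNodes, created once and reused for every tailing iteration rather than spawning
new threads per getJournaledEdits call.

{code:java|title=Sketch (illustrative): reusable pool sized to the number of JournalNodes}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Sketch: one long-lived pool, sized to the number of JournalNodes. */
class JournalCallExample {
  private final ExecutorService pool;

  JournalCallExample(int numJournalNodes) {
    // Created once and reused for every tailing iteration,
    // rather than a new executor/thread per call.
    this.pool = Executors.newFixedThreadPool(numJournalNodes);
  }

  /** Fans one tailing iteration's per-JournalNode calls out to the shared pool. */
  <T> List<Future<T>> callAll(List<Callable<T>> perJournalNodeCalls) {
    List<Future<T>> futures = new ArrayList<>();
    for (Callable<T> call : perJournalNodeCalls) {
      futures.add(pool.submit(call));
    }
    return futures;
  }

  void shutdown() {
    pool.shutdown();
  }
}
{code}

Reusing one bounded pool would also avoid the unbounded native-thread creation shown in 
the OutOfMemoryError below.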

> SBN : Namenode crashes if one of The JN is down
> ---
>
> Key: HDFS-14655
> URL: https://issues.apache.org/jira/browse/HDFS-14655
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14655.poc.patch
>
>
> {noformat}
> 2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
> XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 
> 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1000 MILLISECONDS) | Client.java:975
> 2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
> while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:717)
>   at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
>   at 
> com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
>   at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> 2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
> java.lang.OutOfMemoryError: unable to create new native thread | 
> ExitUtil.java:210
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14204) Backport HDFS-12943 to branch-2

2019-08-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903465#comment-16903465
 ] 

Hadoop QA commented on HDFS-14204:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 35 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
53s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
21s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
19s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
46s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m  
8s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
55s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
50s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
10s{color} | {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  2m 10s{color} | 
{color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 10s{color} 
| {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
22s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 22s{color} 
| {color:red} root-jdk1.8.0_222 with JDK v1.8.0_222 generated 1 new + 1345 
unchanged - 1 fixed = 1346 total (was 1346) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 32s{color} | {color:orange} root: The patch generated 36 new + 3142 
unchanged - 14 fixed = 3178 total (was 3156) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
55s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
51s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
53s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
33s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 32s{color} 
| {color:red} hadoop-hdfs in the 

[jira] [Commented] (HDFS-14706) Checksums are not checked if block meta file is less than 7 bytes

2019-08-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903463#comment-16903463
 ] 

Hadoop QA commented on HDFS-14706:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  3s{color} | {color:orange} root: The patch generated 1 new + 169 unchanged 
- 6 fixed = 170 total (was 175) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
10s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
8s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
51s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}202m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.mover.TestMover |
|   | hadoop.hdfs.server.datanode.TestDiskError |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14706 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977075/HDFS-14706.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  

[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=291686=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291686
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 09/Aug/19 00:54
Start Date: 09/Aug/19 00:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1202: HDDS-1884. 
Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-519738137
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 121 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 9 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for branch |
   | +1 | mvninstall | 618 | trunk passed |
   | +1 | compile | 368 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 855 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | trunk passed |
   | 0 | spotbugs | 443 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 645 | trunk passed |
   | -0 | patch | 481 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 573 | the patch passed |
   | +1 | compile | 368 | the patch passed |
   | +1 | cc | 368 | the patch passed |
   | +1 | javac | 368 | the patch passed |
   | +1 | checkstyle | 68 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 692 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | the patch passed |
   | -1 | findbugs | 442 | hadoop-ozone generated 2 new + 0 unchanged - 0 fixed 
= 2 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 403 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2925 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 8913 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Unread field:OzoneBucket.java:[line 145] |
   |  |  Unwritten field:BucketArgs.java:[line 88] |
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/18/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1202 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 9eca47fd4302 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6ad9a11 |
   | Default Java | 1.8.0_222 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/18/artifact/out/new-findbugs-hadoop-ozone.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/18/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/18/testReport/ |
   | Max. process+thread count | 3946 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/objectstore-service 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/18/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

[jira] [Work logged] (HDDS-1891) Ozone fs shell command should work with default port when port number is not specified

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1891?focusedWorklogId=291684=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291684
 ]

ASF GitHub Bot logged work on HDDS-1891:


Author: ASF GitHub Bot
Created on: 09/Aug/19 00:51
Start Date: 09/Aug/19 00:51
Worklog Time Spent: 10m 
  Work Description: smengcl commented on issue #1218: HDDS-1891. Ozone fs 
shell command should work with default port when port number is not specified
URL: https://github.com/apache/hadoop/pull/1218#issuecomment-519737497
 
 
   > > One more comment, I think we should make a similar change in 
BasicOzoneFileSystem.java.
   > > See, can we have a some utility method, which can be used across 2 
classes.
   > 
   > OzoneFileSystem extends BasicOzoneFileSystem, so this has been taken care. 
Just test it out locally.
   
   While I'm testing it in a docker-compose cluster, it seems that the change 
of `authority` 
[here](https://github.com/apache/hadoop/pull/1218/commits/ab93f4bc3fd8d2acf31dac5bbf79c49546eccacd#diff-e48c4ce6b86d4a33c3f38ba8d6d06ea3R129)
 might have broken something. It throws an error:
   ```
   bash-4.2$ ozone fs -ls o3fs://bucket.volume.om/
   -ls: Wrong FS: o3fs://bucket.volume.om/, expected: 
o3fs://bucket.volume.om:9862
   ...
   ```
   
   After I commented out this line, compiled and re-tested, it works again:
   ```
   bash-4.2$ ozone fs -ls o3fs://bucket.volume.om/
   Found 1 items
   -rw-rw-rw-   1 hadoop hadoop   1485 1970-01-01 00:46 
o3fs://bucket.volume.om/README.txt
   ```
   
   FYI my test prep steps:
   ```bash
   mvn clean install -f pom.ozone.xml -DskipTests=true -Dmaven.javadoc.skip=true
   cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone
   docker-compose up -d --scale datanode=3
   docker-compose exec om /bin/bash
   ozone sh volume create /volume
   ozone sh bucket create /volume/bucket
   vi /etc/hadoop/core-site.xml
   # Add fs.o3fs.impl and fs.defaultFS config and save
   # Ref: https://hadoop.apache.org/ozone/docs/0.4.0-alpha/ozonefs.html
   ozone fs -put README.txt o3fs:///
   ozone fs -ls /
   ```
   
   Investigating.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291684)
Time Spent: 2h 10m  (was: 2h)

> Ozone fs shell command should work with default port when port number is not 
> specified
> --
>
> Key: HDDS-1891
> URL: https://issues.apache.org/jira/browse/HDDS-1891
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> {code:bash|title=Without port number -> Error}
> $ ozone fs -ls o3fs://bucket.volume.localhost/
> -ls: Ozone file system url should be either one of the two forms: 
> o3fs://bucket.volume/key  OR o3fs://bucket.volume.om-host.example.com:5678/key
> ...
> {code}
> {code:bash|title=With port number -> Success}
> $ ozone fs -ls o3fs://bucket.volume.localhost:9862/
> Found 1 items
> -rw-rw-rw-   1 hadoop hadoop   1485 2019-08-01 21:14 
> o3fs://bucket.volume.localhost:9862/README.txt
> {code}
> We expect the first command to attempt port 9862 by default.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1891) Ozone fs shell command should work with default port when port number is not specified

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1891?focusedWorklogId=291683=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291683
 ]

ASF GitHub Bot logged work on HDDS-1891:


Author: ASF GitHub Bot
Created on: 09/Aug/19 00:50
Start Date: 09/Aug/19 00:50
Worklog Time Spent: 10m 
  Work Description: smengcl commented on issue #1218: HDDS-1891. Ozone fs 
shell command should work with default port when port number is not specified
URL: https://github.com/apache/hadoop/pull/1218#issuecomment-519737497
 
 
   > > One more comment, I think we should make a similar change in 
BasicOzoneFileSystem.java.
   > > See, can we have a some utility method, which can be used across 2 
classes.
   > 
   > OzoneFileSystem extends BasicOzoneFileSystem, so this has been taken care. 
Just test it out locally.
   
   While I'm testing it in a docker-compose cluster, it seems that the change 
of `authority` 
[here](https://github.com/apache/hadoop/pull/1218/commits/ab93f4bc3fd8d2acf31dac5bbf79c49546eccacd#diff-e48c4ce6b86d4a33c3f38ba8d6d06ea3R129)
 might have broken something. It throws an error:
   ```
   bash-4.2$ ozone fs -ls o3fs://bucket.volume.om/
   -ls: Wrong FS: o3fs://bucket.volume.om/, expected: 
o3fs://bucket.volume.om:9862
   ...
   ```
   
   After I commented out this line, compiled and re-tested, it works again:
   ```
   bash-4.2$ ozone fs -ls o3fs://bucket.volume.om/
   Found 1 items
   -rw-rw-rw-   1 hadoop hadoop   1485 1970-01-01 00:46 
o3fs://bucket.volume.om/README.txt
   ```
   
   Investigating.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291683)
Time Spent: 2h  (was: 1h 50m)

> Ozone fs shell command should work with default port when port number is not 
> specified
> --
>
> Key: HDDS-1891
> URL: https://issues.apache.org/jira/browse/HDDS-1891
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> {code:bash|title=Without port number -> Error}
> $ ozone fs -ls o3fs://bucket.volume.localhost/
> -ls: Ozone file system url should be either one of the two forms: 
> o3fs://bucket.volume/key  OR o3fs://bucket.volume.om-host.example.com:5678/key
> ...
> {code}
> {code:bash|title=With port number -> Success}
> $ ozone fs -ls o3fs://bucket.volume.localhost:9862/
> Found 1 items
> -rw-rw-rw-   1 hadoop hadoop   1485 2019-08-01 21:14 
> o3fs://bucket.volume.localhost:9862/README.txt
> {code}
> We expect the first command to attempt port 9862 by default.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1891) Ozone fs shell command should work with default port when port number is not specified

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1891?focusedWorklogId=291682=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291682
 ]

ASF GitHub Bot logged work on HDDS-1891:


Author: ASF GitHub Bot
Created on: 09/Aug/19 00:50
Start Date: 09/Aug/19 00:50
Worklog Time Spent: 10m 
  Work Description: smengcl commented on issue #1218: HDDS-1891. Ozone fs 
shell command should work with default port when port number is not specified
URL: https://github.com/apache/hadoop/pull/1218#issuecomment-519737497
 
 
   > > One more comment, I think we should make a similar change in 
BasicOzoneFileSystem.java.
   > > See, can we have a some utility method, which can be used across 2 
classes.
   > 
   > OzoneFileSystem extends BasicOzoneFileSystem, so this has been taken care. 
Just test it out locally.
   
   While I'm testing it in a docker-compose cluster, it seems that the change 
of `authority` 
[here](https://github.com/apache/hadoop/pull/1218/commits/ab93f4bc3fd8d2acf31dac5bbf79c49546eccacd#diff-e48c4ce6b86d4a33c3f38ba8d6d06ea3R129)
 might have broken something. It throws an error:
   {code}
   bash-4.2$ ozone fs -ls o3fs://bucket.volume.om/
   -ls: Wrong FS: o3fs://bucket.volume.om/, expected: 
o3fs://bucket.volume.om:9862
   ...
   {code}
   
   After I commented out this line, compiled and re-tested, it works again:
   {code}
   bash-4.2$ ozone fs -ls o3fs://bucket.volume.om/
   Found 1 items
   -rw-rw-rw-   1 hadoop hadoop   1485 1970-01-01 00:46 
o3fs://bucket.volume.om/README.txt
   {code}
   
   Investigating.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291682)
Time Spent: 1h 50m  (was: 1h 40m)

> Ozone fs shell command should work with default port when port number is not 
> specified
> --
>
> Key: HDDS-1891
> URL: https://issues.apache.org/jira/browse/HDDS-1891
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> {code:bash|title=Without port number -> Error}
> $ ozone fs -ls o3fs://bucket.volume.localhost/
> -ls: Ozone file system url should be either one of the two forms: 
> o3fs://bucket.volume/key  OR o3fs://bucket.volume.om-host.example.com:5678/key
> ...
> {code}
> {code:bash|title=With port number -> Success}
> $ ozone fs -ls o3fs://bucket.volume.localhost:9862/
> Found 1 items
> -rw-rw-rw-   1 hadoop hadoop   1485 2019-08-01 21:14 
> o3fs://bucket.volume.localhost:9862/README.txt
> {code}
> We expect the first command to attempt port 9862 by default.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=291677=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291677
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 09/Aug/19 00:44
Start Date: 09/Aug/19 00:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1202: HDDS-1884. 
Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-519736480
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for branch |
   | +1 | mvninstall | 592 | trunk passed |
   | +1 | compile | 374 | trunk passed |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 878 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | trunk passed |
   | 0 | spotbugs | 420 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 618 | trunk passed |
   | -0 | patch | 472 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 545 | the patch passed |
   | +1 | compile | 381 | the patch passed |
   | +1 | cc | 381 | the patch passed |
   | +1 | javac | 381 | the patch passed |
   | -0 | checkstyle | 43 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 681 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | the patch passed |
   | +1 | findbugs | 636 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 324 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2226 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 8037 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestKeyManagerImpl |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/19/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1202 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux b985ae6f5a93 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6ad9a11 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/19/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/19/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/19/testReport/ |
   | Max. process+thread count | 4685 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/19/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact 

[jira] [Commented] (HDFS-14682) RBF: TestStateStoreFileSystem failed because /tmp/hadoop/dfs/name/current can't be removed

2019-08-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903446#comment-16903446
 ] 

Hadoop QA commented on HDFS-14682:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 25m  3s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14682 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977084/HDFS-14682.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 31b4ede4d999 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / aa5f445 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27449/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27449/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 

[jira] [Commented] (HDDS-1554) Create disk tests for fault injection test

2019-08-08 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903443#comment-16903443
 ] 

Arpit Agarwal commented on HDDS-1554:
-

Looking at the test case implementations:
# {{ITDiskReadOnly#testReadOnlyDiskStartup}} - The following block of code can be 
removed, since it is really testing that the cluster is read-only in safe mode, and 
we already have unit tests for that:
{code}
try {
  createVolumeAndBucket();
} catch (Exception e) {
  LOG.info("Bucket creation failed for read-only disk: ", e);
  Assert.assertTrue("Cluster is still in safe mode.", safeMode);
}
{code}
# {{ITDiskReadOnly#testUpload}} - do we need to wait for safe mode exit after 
restarting the cluster? Also I think this test is essentially the same as the 
previous one. Once we have ensured that read-only disk forces us to remain in 
safe mode, the rest of the checks should be covered by safe-mode unit tests.

Still reviewing the rest.
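
On the safe-mode question in point 2, a minimal polling sketch (plain Java; the 
safe-mode check passed in is a hypothetical placeholder, not the MiniOzoneCluster API):

{code:java|title=Sketch (illustrative): wait for safe mode exit after a restart}
import java.util.function.BooleanSupplier;

/** Sketch: poll a condition until it holds or a timeout expires. */
final class SafeModeWaitExample {

  static boolean waitFor(BooleanSupplier condition, long intervalMs, long timeoutMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (condition.getAsBoolean()) {
        return true;
      }
      Thread.sleep(intervalMs);
    }
    return condition.getAsBoolean();
  }

  public static void main(String[] args) throws InterruptedException {
    // Hypothetical accessor standing in for "is the restarted cluster out of safe mode?"
    BooleanSupplier outOfSafeMode = () -> true;
    boolean ready = waitFor(outOfSafeMode, 1000, 120_000);
    System.out.println("cluster out of safe mode: " + ready);
  }
}
{code}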

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch, HDDS-1554.007.patch, HDDS-1554.008.patch, 
> HDDS-1554.009.patch, HDDS-1554.010.patch, HDDS-1554.011.patch, 
> HDDS-1554.012.patch, HDDS-1554.013.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The current plan for fault injection disk tests are:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1554) Create disk tests for fault injection test

2019-08-08 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903436#comment-16903436
 ] 

Eric Yang commented on HDDS-1554:
-

[~arp] The tests are written to run in the integration-test phase; try:

{code}
mvn verify -Pit,docker-build
{code}

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch, HDDS-1554.007.patch, HDDS-1554.008.patch, 
> HDDS-1554.009.patch, HDDS-1554.010.patch, HDDS-1554.011.patch, 
> HDDS-1554.012.patch, HDDS-1554.013.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The current plan for the fault injection disk tests is:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12914) Block report leases cause missing blocks until next report

2019-08-08 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903430#comment-16903430
 ] 

Konstantin Shvachko commented on HDFS-12914:


Confirmed that reverting the patch fixes {{TestSafeMode}}. [~jojochuang], please 
take a look.

> Block report leases cause missing blocks until next report
> --
>
> Key: HDFS-12914
> URL: https://issues.apache.org/jira/browse/HDFS-12914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 2.9.2
>Reporter: Daryn Sharp
>Assignee: Santosh Marella
>Priority: Critical
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-12914-branch-2.001.patch, 
> HDFS-12914-trunk.00.patch, HDFS-12914-trunk.01.patch, HDFS-12914.005.patch, 
> HDFS-12914.006.patch, HDFS-12914.007.patch, HDFS-12914.008.patch, 
> HDFS-12914.009.patch, HDFS-12914.branch-2.000.patch, 
> HDFS-12914.branch-2.001.patch, HDFS-12914.branch-2.002.patch, 
> HDFS-12914.branch-2.8.001.patch, HDFS-12914.branch-2.8.002.patch, 
> HDFS-12914.branch-2.patch, HDFS-12914.branch-3.0.patch, 
> HDFS-12914.branch-3.1.001.patch, HDFS-12914.branch-3.1.002.patch, 
> HDFS-12914.branch-3.2.patch, HDFS-12914.utfix.patch
>
>
> {{BlockReportLeaseManager#checkLease}} will reject FBRs from DNs for 
> conditions such as "unknown datanode", "not in pending set", "lease has 
> expired", wrong lease id, etc.  Lease rejection does not throw an exception.  
> It returns false, which bubbles up to {{NameNodeRpcServer#blockReport}} and is 
> interpreted as {{noStaleStorages}}.
> A re-registering node whose FBR is rejected due to an invalid lease becomes 
> active with _no blocks_.  A replication storm ensues, possibly causing DNs to 
> temporarily go dead (HDFS-12645), leading to more FBR lease rejections on 
> re-registration.  The cluster will have many "missing blocks" until the DNs' 
> next FBR is sent and/or forced.
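
For illustration only, a schematic sketch of the failure pattern described above 
(simplified, hypothetical names and signatures, not the actual Hadoop source): a 
boolean lease-check result is silently folded into an unrelated flag, so a 
rejected full block report is not distinguished from a processed one.

{code:java}
// Schematic only: shows how a rejected lease (checkLease == false) can be
// misread by the caller as a successfully processed report.
class BlockReportFlowSketch {

  // Stands in for the lease check; rejections (unknown datanode, expired
  // lease, wrong lease id, ...) return false instead of throwing.
  boolean checkLease(long leaseId) {
    return false; // rejected
  }

  // Stands in for the block report handling path.
  void processBlockReport(long leaseId) {
    // Bug pattern: the rejection result is folded into an unrelated flag, so
    // the caller continues as if the report had been applied and the
    // re-registering datanode stays "active" with no blocks until its next FBR.
    boolean noStaleStorages = checkLease(leaseId);
    // ... processing continues regardless of the rejection ...
  }
}
{code}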



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1554) Create disk tests for fault injection test

2019-08-08 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903428#comment-16903428
 ] 

Arpit Agarwal commented on HDDS-1554:
-

Sorry about the delay in getting back to these, [~eyang].

I tried running the tests. I used the following command:

{code}
$ mvn test -Pit -Ddocker.image=apache/ozone:0.5.0-SNAPSHOT
...
[INFO] --- maven-surefire-plugin:3.0.0-M1:test (default-test) @ 
hadoop-ozone-read-write-tests ---
[INFO] Tests are skipped.
{code}

It looks like the tests were skipped. Any idea what I did wrong?

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch, HDDS-1554.007.patch, HDDS-1554.008.patch, 
> HDDS-1554.009.patch, HDDS-1554.010.patch, HDDS-1554.011.patch, 
> HDDS-1554.012.patch, HDDS-1554.013.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The current plan for the fault injection disk tests is:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=291659=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291659
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 08/Aug/19 23:55
Start Date: 08/Aug/19 23:55
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1202: HDDS-1884. 
Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-519727991
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291659)
Time Spent: 8h 50m  (was: 8h 40m)

> Support Bucket ACL operations for OM HA.
> 
>
> Key: HDDS-1884
> URL: https://issues.apache.org/jira/browse/HDDS-1884
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h 50m
>  Remaining Estimate: 0h
>
> -HDDS-15+40+- adds 4 new APIs for the Ozone RPC client. The OM HA 
> implementation needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=291652=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291652
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 08/Aug/19 23:20
Start Date: 08/Aug/19 23:20
Worklog Time Spent: 10m 
  Work Description: shwetayakkali commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r312281064
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestUtilizationService.java
 ##
 @@ -62,66 +64,73 @@ public void setUpResultList() {
 (long) i));
   }
 }
+return resultList;
   }
 
   @Test
   public void testGetFileCounts() throws IOException {
-setUpResultList();
+List resultList = setUpResultList();
 
 utilizationService = mock(UtilizationService.class);
 when(utilizationService.getFileCounts()).thenCallRealMethod();
 when(utilizationService.getDao()).thenReturn(fileCountBySizeDao);
 when(fileCountBySizeDao.findAll()).thenReturn(resultList);
 
-utilizationService.getFileCounts();
+Response response = utilizationService.getFileCounts();
+// get result list from Response entity
+List responseList =
+(List) response.getEntity();
+
 verify(utilizationService, times(1)).getFileCounts();
 verify(fileCountBySizeDao, times(1)).findAll();
 
-assertEquals(maxBinSize, resultList.size());
+FileSizeCountTask fileSizeCountTask = mock(FileSizeCountTask.class);
+when(fileSizeCountTask.getMaxFileSizeUpperBound()).
+thenReturn(1125899906842624L);
+when(fileSizeCountTask.getMaxBinSize()).thenReturn(maxBinSize);
+when(fileSizeCountTask.calculateBinIndex(anyLong())).thenCallRealMethod();
+assertEquals(maxBinSize, responseList.size());
+
 long fileSize = 4096L;  // 4KB
-int index =  findIndex(fileSize);
-long count = resultList.get(index).getCount();
+int index =  fileSizeCountTask.calculateBinIndex(fileSize);
+
+long count = responseList.get(index).getCount();
 assertEquals(index, count);
 
 fileSize = 1125899906842624L;   // 1PB
-index = findIndex(fileSize);
-count = resultList.get(index).getCount();
+index = fileSizeCountTask.calculateBinIndex(fileSize);
+count = responseList.get(index).getCount();
+//last extra bin for files >= 1PB
 assertEquals(maxBinSize - 1, index);
 assertEquals(index, count);
 
 fileSize = 1025L;   // 1 KB + 1B
-index = findIndex(fileSize);
-count = resultList.get(index).getCount(); //last extra bin for files >= 1PB
+index = fileSizeCountTask.calculateBinIndex(fileSize);
+count = responseList.get(index).getCount();
 assertEquals(index, count);
 
 fileSize = 25L;
-index = findIndex(fileSize);
-count = resultList.get(index).getCount();
+index = fileSizeCountTask.calculateBinIndex(fileSize);
+count = responseList.get(index).getCount();
 assertEquals(index, count);
 
 fileSize = 1125899906842623L;   // 1PB - 1B
-index = findIndex(fileSize);
-count = resultList.get(index).getCount();
+index = fileSizeCountTask.calculateBinIndex(fileSize);
+count = responseList.get(index).getCount();
 assertEquals(index, count);
 
 fileSize = 1125899906842624L * 4;   // 4 PB
-index = findIndex(fileSize);
-count = resultList.get(index).getCount();
+index = fileSizeCountTask.calculateBinIndex(fileSize);
 
 Review comment:
   So, what exactly should testGetFileCounts() test?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291652)
Time Spent: 10h 40m  (was: 10.5h)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10h 40m
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. Recon can help them with the information 
> by iterating the OM 

[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=291653=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291653
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 08/Aug/19 23:20
Start Date: 08/Aug/19 23:20
Worklog Time Spent: 10m 
  Work Description: shwetayakkali commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r312281064
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestUtilizationService.java
 ##
 @@ -62,66 +64,73 @@ public void setUpResultList() {
 (long) i));
   }
 }
+return resultList;
   }
 
   @Test
   public void testGetFileCounts() throws IOException {
-setUpResultList();
+List resultList = setUpResultList();
 
 utilizationService = mock(UtilizationService.class);
 when(utilizationService.getFileCounts()).thenCallRealMethod();
 when(utilizationService.getDao()).thenReturn(fileCountBySizeDao);
 when(fileCountBySizeDao.findAll()).thenReturn(resultList);
 
-utilizationService.getFileCounts();
+Response response = utilizationService.getFileCounts();
+// get result list from Response entity
+List responseList =
+(List) response.getEntity();
+
 verify(utilizationService, times(1)).getFileCounts();
 verify(fileCountBySizeDao, times(1)).findAll();
 
-assertEquals(maxBinSize, resultList.size());
+FileSizeCountTask fileSizeCountTask = mock(FileSizeCountTask.class);
+when(fileSizeCountTask.getMaxFileSizeUpperBound()).
+thenReturn(1125899906842624L);
+when(fileSizeCountTask.getMaxBinSize()).thenReturn(maxBinSize);
+when(fileSizeCountTask.calculateBinIndex(anyLong())).thenCallRealMethod();
+assertEquals(maxBinSize, responseList.size());
+
 long fileSize = 4096L;  // 4KB
-int index =  findIndex(fileSize);
-long count = resultList.get(index).getCount();
+int index =  fileSizeCountTask.calculateBinIndex(fileSize);
+
+long count = responseList.get(index).getCount();
 assertEquals(index, count);
 
 fileSize = 1125899906842624L;   // 1PB
-index = findIndex(fileSize);
-count = resultList.get(index).getCount();
+index = fileSizeCountTask.calculateBinIndex(fileSize);
+count = responseList.get(index).getCount();
+//last extra bin for files >= 1PB
 assertEquals(maxBinSize - 1, index);
 assertEquals(index, count);
 
 fileSize = 1025L;   // 1 KB + 1B
-index = findIndex(fileSize);
-count = resultList.get(index).getCount(); //last extra bin for files >= 1PB
+index = fileSizeCountTask.calculateBinIndex(fileSize);
+count = responseList.get(index).getCount();
 assertEquals(index, count);
 
 fileSize = 25L;
-index = findIndex(fileSize);
-count = resultList.get(index).getCount();
+index = fileSizeCountTask.calculateBinIndex(fileSize);
+count = responseList.get(index).getCount();
 assertEquals(index, count);
 
 fileSize = 1125899906842623L;   // 1PB - 1B
-index = findIndex(fileSize);
-count = resultList.get(index).getCount();
+index = fileSizeCountTask.calculateBinIndex(fileSize);
+count = responseList.get(index).getCount();
 assertEquals(index, count);
 
 fileSize = 1125899906842624L * 4;   // 4 PB
-index = findIndex(fileSize);
-count = resultList.get(index).getCount();
+index = fileSizeCountTask.calculateBinIndex(fileSize);
 
 Review comment:
   So, what exactly should testGetFileCounts() test as part of its assertions?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291653)
Time Spent: 10h 50m  (was: 10h 40m)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10h 50m
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. Recon can help them with the 

[jira] [Work logged] (HDDS-1891) Ozone fs shell command should work with default port when port number is not specified

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1891?focusedWorklogId=291651=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291651
 ]

ASF GitHub Bot logged work on HDDS-1891:


Author: ASF GitHub Bot
Created on: 08/Aug/19 23:18
Start Date: 08/Aug/19 23:18
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1218: HDDS-1891. 
Ozone fs shell command should work with default port when port number is not 
specified
URL: https://github.com/apache/hadoop/pull/1218#discussion_r312280716
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
 ##
 @@ -113,11 +115,14 @@ public void initialize(URI name, Configuration conf) 
throws IOException {
 String omPort = String.valueOf(-1);
 if (!isEmpty(remaining)) {
   String[] parts = remaining.split(":");
-  if (parts.length != 2) {
+  // Array length should only be 1 or 2
+  if (parts.length > 2) {
 throw new IllegalArgumentException(URI_EXCEPTION_TEXT);
 
 Review comment:
   Done. Updated `URI_EXCEPTION_TEXT`.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291651)
Time Spent: 1h 40m  (was: 1.5h)

> Ozone fs shell command should work with default port when port number is not 
> specified
> --
>
> Key: HDDS-1891
> URL: https://issues.apache.org/jira/browse/HDDS-1891
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> {code:bash|title=Without port number -> Error}
> $ ozone fs -ls o3fs://bucket.volume.localhost/
> -ls: Ozone file system url should be either one of the two forms: 
> o3fs://bucket.volume/key  OR o3fs://bucket.volume.om-host.example.com:5678/key
> ...
> {code}
> {code:bash|title=With port number -> Success}
> $ ozone fs -ls o3fs://bucket.volume.localhost:9862/
> Found 1 items
> -rw-rw-rw-   1 hadoop hadoop   1485 2019-08-01 21:14 
> o3fs://bucket.volume.localhost:9862/README.txt
> {code}
> We expect the first command to attempt port 9862 by default.
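
For readers skimming the thread, a self-contained sketch of the parsing behavior 
the patch aims for (hypothetical helper names, not the actual BasicOzoneFileSystem 
code; the parts.length check mirrors the diff above, and the default 9862 is taken 
from the example in the description):

{code:java}
// Sketch: accept "om-host" or "om-host:port" and fall back to a default port.
final class OmAuthoritySketch {
  static final int ASSUMED_DEFAULT_OM_PORT = 9862;

  /** Returns {host, port} for "host" or "host:port"; rejects anything else. */
  static String[] parseOmAuthority(String remaining) {
    String[] parts = remaining.split(":");
    if (parts.length > 2) {
      throw new IllegalArgumentException(
          "Expected om-host or om-host:port, got: " + remaining);
    }
    String host = parts[0];
    String port = parts.length == 2
        ? parts[1]
        : String.valueOf(ASSUMED_DEFAULT_OM_PORT);
    return new String[] {host, port};
  }
}
{code}

With that fallback, {{o3fs://bucket.volume.localhost/}} would resolve against port 
9862, matching the expected behavior in the first example above.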



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=291649=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291649
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 08/Aug/19 23:18
Start Date: 08/Aug/19 23:18
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #1146: HDDS-1366. 
Add ability in Recon to track the number of small files in an Ozone Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r312280564
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestUtilizationService.java
 ##
 @@ -62,66 +64,73 @@ public void setUpResultList() {
 (long) i));
   }
 }
+return resultList;
   }
 
   @Test
   public void testGetFileCounts() throws IOException {
-setUpResultList();
+List resultList = setUpResultList();
 
 utilizationService = mock(UtilizationService.class);
 when(utilizationService.getFileCounts()).thenCallRealMethod();
 when(utilizationService.getDao()).thenReturn(fileCountBySizeDao);
 when(fileCountBySizeDao.findAll()).thenReturn(resultList);
 
-utilizationService.getFileCounts();
+Response response = utilizationService.getFileCounts();
+// get result list from Response entity
+List responseList =
+(List) response.getEntity();
+
 verify(utilizationService, times(1)).getFileCounts();
 verify(fileCountBySizeDao, times(1)).findAll();
 
-assertEquals(maxBinSize, resultList.size());
+FileSizeCountTask fileSizeCountTask = mock(FileSizeCountTask.class);
+when(fileSizeCountTask.getMaxFileSizeUpperBound()).
+thenReturn(1125899906842624L);
+when(fileSizeCountTask.getMaxBinSize()).thenReturn(maxBinSize);
+when(fileSizeCountTask.calculateBinIndex(anyLong())).thenCallRealMethod();
+assertEquals(maxBinSize, responseList.size());
+
 long fileSize = 4096L;  // 4KB
-int index =  findIndex(fileSize);
-long count = resultList.get(index).getCount();
+int index =  fileSizeCountTask.calculateBinIndex(fileSize);
+
+long count = responseList.get(index).getCount();
 assertEquals(index, count);
 
 fileSize = 1125899906842624L;   // 1PB
-index = findIndex(fileSize);
-count = resultList.get(index).getCount();
+index = fileSizeCountTask.calculateBinIndex(fileSize);
+count = responseList.get(index).getCount();
+//last extra bin for files >= 1PB
 assertEquals(maxBinSize - 1, index);
 assertEquals(index, count);
 
 fileSize = 1025L;   // 1 KB + 1B
-index = findIndex(fileSize);
-count = resultList.get(index).getCount(); //last extra bin for files >= 1PB
+index = fileSizeCountTask.calculateBinIndex(fileSize);
+count = responseList.get(index).getCount();
 assertEquals(index, count);
 
 fileSize = 25L;
-index = findIndex(fileSize);
-count = resultList.get(index).getCount();
+index = fileSizeCountTask.calculateBinIndex(fileSize);
+count = responseList.get(index).getCount();
 assertEquals(index, count);
 
 fileSize = 1125899906842623L;   // 1PB - 1B
-index = findIndex(fileSize);
-count = resultList.get(index).getCount();
+index = fileSizeCountTask.calculateBinIndex(fileSize);
+count = responseList.get(index).getCount();
 assertEquals(index, count);
 
 fileSize = 1125899906842624L * 4;   // 4 PB
-index = findIndex(fileSize);
-count = resultList.get(index).getCount();
+index = fileSizeCountTask.calculateBinIndex(fileSize);
 
 Review comment:
   These assertions are not needed. FileSizeCountTask's behavior is already 
tested in the TestFileSizeCountTask unit test class. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291649)
Time Spent: 10.5h  (was: 10h 20m)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. Recon 

[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=291645=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291645
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 08/Aug/19 23:16
Start Date: 08/Aug/19 23:16
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #1146: HDDS-1366. 
Add ability in Recon to track the number of small files in an Ozone Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r312278472
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -67,19 +72,22 @@ public FileSizeCountTask(OMMetadataManager 
omMetadataManager,
 upperBoundCount = new long[getMaxBinSize()];
   }
 
-  protected long getOneKB() {
+  @VisibleForTesting
+  public long getOneKB() {
 
 Review comment:
   A public method does not need the VisibleForTesting annotation.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291645)
Time Spent: 10h 20m  (was: 10h 10m)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10h 20m
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. Recon can help them with the information 
> by iterating the OM Key Table and dividing the keys into different buckets 
> based on the data size. 
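
To make the bucketing idea concrete, here is one plausible sketch. Assumptions: 
power-of-two bins starting at 1 KB, a 1 PB upper bound, and a final overflow bin, 
echoing the getOneKB/getMaxFileSizeUpperBound/calculateBinIndex names in the 
review above; this is not the actual FileSizeCountTask code.

{code:java}
// Sketch only: bin 0 holds sizes <= 1 KB, bin i holds sizes in (2^(i-1) KB, 2^i KB],
// and the last bin holds everything >= 1 PB ("last extra bin for files >= 1PB").
final class FileSizeBinningSketch {
  static final long ONE_KB = 1024L;
  static final long ONE_PB = 1125899906842624L; // upper bound used in the tests

  // 40 doublings from 1 KB to 1 PB, plus bin 0 and the overflow bin.
  static final int MAX_BIN_SIZE = log2Ceil(ONE_PB / ONE_KB) + 2; // 42

  static int calculateBinIndex(long fileSize) {
    if (fileSize >= ONE_PB) {
      return MAX_BIN_SIZE - 1;                     // overflow bin
    }
    if (fileSize <= ONE_KB) {
      return 0;
    }
    long sizeInKb = (fileSize + ONE_KB - 1) / ONE_KB; // ceil(fileSize / 1 KB)
    return log2Ceil(sizeInKb);
  }

  private static int log2Ceil(long v) {            // ceil(log2(v)) for v >= 1
    return 64 - Long.numberOfLeadingZeros(v - 1);
  }
}
{code}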



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=291646=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291646
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 08/Aug/19 23:16
Start Date: 08/Aug/19 23:16
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #1146: HDDS-1366. 
Add ability in Recon to track the number of small files in an Ozone Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r312278997
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestUtilizationService.java
 ##
 @@ -62,66 +64,73 @@ public void setUpResultList() {
 (long) i));
   }
 }
+return resultList;
   }
 
   @Test
   public void testGetFileCounts() throws IOException {
-setUpResultList();
+List resultList = setUpResultList();
 
 utilizationService = mock(UtilizationService.class);
 when(utilizationService.getFileCounts()).thenCallRealMethod();
 when(utilizationService.getDao()).thenReturn(fileCountBySizeDao);
 when(fileCountBySizeDao.findAll()).thenReturn(resultList);
 
-utilizationService.getFileCounts();
+Response response = utilizationService.getFileCounts();
+// get result list from Response entity
+List responseList =
+(List) response.getEntity();
+
 verify(utilizationService, times(1)).getFileCounts();
 
 Review comment:
   Why are we verifying the actual method call? Method call verification is 
generally used for mocked methods (so that we know the code path went through 
them). 
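
To illustrate the point with a minimal Mockito example (hypothetical Dao/Service 
names, not the Recon classes): verify() earns its keep when it proves the code 
under test reached a mocked collaborator, not when it re-asserts a call the test 
just made itself.

{code:java}
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

class VerifyUsageSketch {
  interface Dao { int findAll(); }

  static class Service {
    private final Dao dao;
    Service(Dao dao) { this.dao = dao; }
    int getFileCounts() { return dao.findAll(); }
  }

  void example() {
    Dao dao = mock(Dao.class);
    when(dao.findAll()).thenReturn(42);

    Service service = new Service(dao);     // real object under test
    int counts = service.getFileCounts();   // counts == 42 from the stub

    // Useful: proves the code under test actually hit the mocked collaborator.
    verify(dao, times(1)).findAll();

    // Redundant: verifying getFileCounts() on the object we just called
    // ourselves would not tell us anything new.
  }
}
{code}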
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291646)
Time Spent: 10h 20m  (was: 10h 10m)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10h 20m
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. Recon can help them with the information 
> by iterating the OM Key Table and dividing the keys into different buckets 
> based on the data size. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=291644=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291644
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 08/Aug/19 23:16
Start Date: 08/Aug/19 23:16
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #1146: HDDS-1366. 
Add ability in Recon to track the number of small files in an Ozone Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r312268532
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -38,6 +40,9 @@
 import java.util.Iterator;
 import java.util.List;
 
+import static org.apache.hadoop.utils.BatchOperation.Operation.DELETE;
 
 Review comment:
   Let's use 
org.apache.hadoop.ozone.recon.tasks.OMDBUpdateEvent.OMDBUpdateAction to keep it 
consistent. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291644)
Time Spent: 10h 10m  (was: 10h)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10h 10m
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. Recon can help them with the information 
> by iterating the OM Key Table and dividing the keys into different buckets 
> based on the data size. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1913) Fix OzoneBucket and RpcClient APIS for acl

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1913?focusedWorklogId=291641=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291641
 ]

ASF GitHub Bot logged work on HDDS-1913:


Author: ASF GitHub Bot
Created on: 08/Aug/19 23:15
Start Date: 08/Aug/19 23:15
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1257: HDDS-1913. Fix 
OzoneBucket and RpcClient APIS for acl.
URL: https://github.com/apache/hadoop/pull/1257#issuecomment-519720318
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 85 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 8 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for branch |
   | +1 | mvninstall | 642 | trunk passed |
   | +1 | compile | 391 | trunk passed |
   | +1 | checkstyle | 77 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1059 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 238 | trunk passed |
   | 0 | spotbugs | 539 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 807 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | +1 | mvninstall | 655 | the patch passed |
   | +1 | compile | 453 | the patch passed |
   | +1 | cc | 453 | the patch passed |
   | +1 | javac | 453 | the patch passed |
   | +1 | checkstyle | 109 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 868 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 205 | the patch passed |
   | -1 | findbugs | 551 | hadoop-ozone generated 2 new + 0 unchanged - 0 fixed 
= 2 total (was 0) |
   ||| _ Other Tests _ |
   | -1 | unit | 395 | hadoop-hdds in the patch failed. |
   | -1 | unit | 3658 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 61 | The patch does not generate ASF License warnings. |
   | | | 10802 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Unread field:OzoneBucket.java:[line 145] |
   |  |  Unwritten field:BucketArgs.java:[line 88] |
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1257/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1257 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 3b2f9bdfcadc 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 5840df8 |
   | Default Java | 1.8.0_212 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1257/1/artifact/out/new-findbugs-hadoop-ozone.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1257/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1257/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1257/1/testReport/ |
   | Max. process+thread count | 3877 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/objectstore-service 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1257/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an 

[jira] [Work logged] (HDDS-1891) Ozone fs shell command should work with default port when port number is not specified

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1891?focusedWorklogId=291640=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291640
 ]

ASF GitHub Bot logged work on HDDS-1891:


Author: ASF GitHub Bot
Created on: 08/Aug/19 23:13
Start Date: 08/Aug/19 23:13
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1218: HDDS-1891. 
Ozone fs shell command should work with default port when port number is not 
specified
URL: https://github.com/apache/hadoop/pull/1218#discussion_r312279650
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
 ##
 @@ -113,11 +115,14 @@ public void initialize(URI name, Configuration conf) 
throws IOException {
 String omPort = String.valueOf(-1);
 if (!isEmpty(remaining)) {
   String[] parts = remaining.split(":");
-  if (parts.length != 2) {
+  // Array length should only be 1 or 2
+  if (parts.length > 2) {
 throw new IllegalArgumentException(URI_EXCEPTION_TEXT);
 
 Review comment:
   On it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291640)
Time Spent: 1.5h  (was: 1h 20m)

> Ozone fs shell command should work with default port when port number is not 
> specified
> --
>
> Key: HDDS-1891
> URL: https://issues.apache.org/jira/browse/HDDS-1891
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> {code:bash|title=Without port number -> Error}
> $ ozone fs -ls o3fs://bucket.volume.localhost/
> -ls: Ozone file system url should be either one of the two forms: 
> o3fs://bucket.volume/key  OR o3fs://bucket.volume.om-host.example.com:5678/key
> ...
> {code}
> {code:bash|title=With port number -> Success}
> $ ozone fs -ls o3fs://bucket.volume.localhost:9862/
> Found 1 items
> -rw-rw-rw-   1 hadoop hadoop   1485 2019-08-01 21:14 
> o3fs://bucket.volume.localhost:9862/README.txt
> {code}
> We expect the first command to attempt port 9862 by default.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1891) Ozone fs shell command should work with default port when port number is not specified

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1891?focusedWorklogId=291639=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291639
 ]

ASF GitHub Bot logged work on HDDS-1891:


Author: ASF GitHub Bot
Created on: 08/Aug/19 23:10
Start Date: 08/Aug/19 23:10
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1218: HDDS-1891. 
Ozone fs shell command should work with default port when port number is not 
specified
URL: https://github.com/apache/hadoop/pull/1218#discussion_r312278971
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
 ##
 @@ -113,11 +115,14 @@ public void initialize(URI name, Configuration conf) 
throws IOException {
 String omPort = String.valueOf(-1);
 if (!isEmpty(remaining)) {
   String[] parts = remaining.split(":");
-  if (parts.length != 2) {
+  // Array length should only be 1 or 2
+  if (parts.length > 2) {
 throw new IllegalArgumentException(URI_EXCEPTION_TEXT);
   }
   omHost = parts[0];
-  omPort = parts[1];
+  // If port number is not specified, try default OM port
+  omPort = parts.length == 2 ?
+  parts[1] : String.valueOf(OZONE_OM_PORT_DEFAULT);
 
 Review comment:
   Updated. I'm using `OmUtils.getOmRpcPort(conf)` instead.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291639)
Time Spent: 1h 20m  (was: 1h 10m)

> Ozone fs shell command should work with default port when port number is not 
> specified
> --
>
> Key: HDDS-1891
> URL: https://issues.apache.org/jira/browse/HDDS-1891
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {code:bash|title=Without port number -> Error}
> $ ozone fs -ls o3fs://bucket.volume.localhost/
> -ls: Ozone file system url should be either one of the two forms: 
> o3fs://bucket.volume/key  OR o3fs://bucket.volume.om-host.example.com:5678/key
> ...
> {code}
> {code:bash|title=With port number -> Success}
> $ ozone fs -ls o3fs://bucket.volume.localhost:9862/
> Found 1 items
> -rw-rw-rw-   1 hadoop hadoop   1485 2019-08-01 21:14 
> o3fs://bucket.volume.localhost:9862/README.txt
> {code}
> We expect the first command to attempt port 9862 by default.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14682) RBF: TestStateStoreFileSystem failed because /tmp/hadoop/dfs/name/current can't be removed

2019-08-08 Thread Nikhil Navadiya (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikhil Navadiya updated HDFS-14682:
---
Attachment: HDFS-14682.002.patch

> RBF: TestStateStoreFileSystem failed because /tmp/hadoop/dfs/name/current 
> can't be removed
> --
>
> Key: HDFS-14682
> URL: https://issues.apache.org/jira/browse/HDFS-14682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Wei-Chiu Chuang
>Assignee: Nikhil Navadiya
>Priority: Minor
> Attachments: HDFS-14682.001.patch, HDFS-14682.002.patch
>
>
> I happen to have /tmp/hadoop owned by root, and TestStateStoreFileSystem 
> failed to delete this directory. 
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.156 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.federation.store.driver.TestStateStoreFileSystem
> [ERROR] 
> org.apache.hadoop.hdfs.server.federation.store.driver.TestStateStoreFileSystem
>   Time elapsed: 1.153 s  <<< ERROR!
> java.io.IOException: Cannot remove current directory: 
> /tmp/hadoop/dfs/name/current
> at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:358)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:571)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:592)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:172)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403)
> at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1065)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:986)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:516)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:475)
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.TestStateStoreFileSystem.setupCluster(TestStateStoreFileSystem.java:48)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1863) Freon RandomKeyGenerator even if keySize is set to 0, it returns some random data to key

2019-08-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903394#comment-16903394
 ] 

Hudson commented on HDDS-1863:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17067 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17067/])
HDDS-1863. Freon RandomKeyGenerator even if keySize is set to 0, it (github: 
rev aa5f445fb9d06f9967aadf305fa3cd509a16b982)
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
* (edit) 
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/freon/TestRandomKeyGenerator.java


> Freon RandomKeyGenerator even if keySize is set to 0, it returns some random 
> data to key
> 
>
> Key: HDDS-1863
> URL: https://issues.apache.org/jira/browse/HDDS-1863
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
>  
> {code:java}
> ***
> Status: Success
> Git Base Revision: e97acb3bd8f3befd27418996fa5d4b50bf2e17bf
> Number of Volumes created: 1
> Number of Buckets created: 1
> Number of Keys added: 1
> Ratis replication factor: THREE
> Ratis replication type: STAND_ALONE
> Average Time spent in volume creation: 00:00:00,002
> Average Time spent in bucket creation: 00:00:00,000
> Average Time spent in key creation: 00:00:00,002
> Average Time spent in key write: 00:00:00,101
> Total bytes written: 0
> Total Execution time: 00:00:05,699
>  
> {code}
> ***
> [root@ozoneha-2 ozone-0.5.0-SNAPSHOT]# bin/ozone sh key list 
> /vol-0-28271/bucket-0-95211
> [
> {   "version" : 0,   "md5hash" : null,   "createdOn" : "Fri, 26 Jul 2019 
> 01:02:08 GMT",   "modifiedOn" : "Fri, 26 Jul 2019 01:02:09 GMT",   "size" : 
> 36,   "keyName" : "key-0-98235",   "type" : null }
> ]
>  
> This is because of the below code in RandomKeyGenerator:
> {code:java}
> for (long nrRemaining = keySize - randomValue.length;
>  nrRemaining > 0; nrRemaining -= bufferSize) {
>  int curSize = (int) Math.min(bufferSize, nrRemaining);
>  os.write(keyValueBuffer, 0, curSize);
> }
> os.write(randomValue);
> os.close();{code}
>  
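
For readers following along, a minimal sketch of one way to guard the trailing 
write (an assumed fix shape for the keySize == 0 case, not necessarily the change 
that was committed):

{code:java}
import java.io.IOException;
import java.io.OutputStream;

// Sketch only: skip the trailing random bytes when keySize is 0, so an empty
// key really ends up empty. (Keys smaller than randomValue.length would still
// need extra handling; that case is out of scope for this illustration.)
final class KeyWriterSketch {
  static void writeKey(OutputStream os, byte[] keyValueBuffer, byte[] randomValue,
      long keySize, int bufferSize) throws IOException {
    if (keySize > 0) {
      for (long nrRemaining = keySize - randomValue.length;
           nrRemaining > 0; nrRemaining -= bufferSize) {
        int curSize = (int) Math.min(bufferSize, nrRemaining);
        os.write(keyValueBuffer, 0, curSize);
      }
      os.write(randomValue);
    }
    os.close();
  }
}
{code}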



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1863) Freon RandomKeyGenerator even if keySize is set to 0, it returns some random data to key

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1863?focusedWorklogId=291621=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291621
 ]

ASF GitHub Bot logged work on HDDS-1863:


Author: ASF GitHub Bot
Created on: 08/Aug/19 22:40
Start Date: 08/Aug/19 22:40
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1167: 
HDDS-1863. Freon RandomKeyGenerator even if keySize is set to 0, it returns 
some random data to key.
URL: https://github.com/apache/hadoop/pull/1167
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291621)
Time Spent: 4h  (was: 3h 50m)

> Freon RandomKeyGenerator even if keySize is set to 0, it returns some random 
> data to key
> 
>
> Key: HDDS-1863
> URL: https://issues.apache.org/jira/browse/HDDS-1863
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
>  
> {code:java}
> ***
> Status: Success
> Git Base Revision: e97acb3bd8f3befd27418996fa5d4b50bf2e17bf
> Number of Volumes created: 1
> Number of Buckets created: 1
> Number of Keys added: 1
> Ratis replication factor: THREE
> Ratis replication type: STAND_ALONE
> Average Time spent in volume creation: 00:00:00,002
> Average Time spent in bucket creation: 00:00:00,000
> Average Time spent in key creation: 00:00:00,002
> Average Time spent in key write: 00:00:00,101
> Total bytes written: 0
> Total Execution time: 00:00:05,699
>  
> {code}
> ***
> [root@ozoneha-2 ozone-0.5.0-SNAPSHOT]# bin/ozone sh key list 
> /vol-0-28271/bucket-0-95211
> [
> {   "version" : 0,   "md5hash" : null,   "createdOn" : "Fri, 26 Jul 2019 
> 01:02:08 GMT",   "modifiedOn" : "Fri, 26 Jul 2019 01:02:09 GMT",   "size" : 
> 36,   "keyName" : "key-0-98235",   "type" : null }
> ]
>  
> This is because of the below code in RandomKeyGenerator:
> {code:java}
> for (long nrRemaining = keySize - randomValue.length;
>  nrRemaining > 0; nrRemaining -= bufferSize) {
>  int curSize = (int) Math.min(bufferSize, nrRemaining);
>  os.write(keyValueBuffer, 0, curSize);
> }
> os.write(randomValue);
> os.close();{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1863) Freon RandomKeyGenerator even if keySize is set to 0, it returns some random data to key

2019-08-08 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1863:
-
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

> Freon RandomKeyGenerator even if keySize is set to 0, it returns some random 
> data to key
> 
>
> Key: HDDS-1863
> URL: https://issues.apache.org/jira/browse/HDDS-1863
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
>  
> {code:java}
> ***
> Status: Success
> Git Base Revision: e97acb3bd8f3befd27418996fa5d4b50bf2e17bf
> Number of Volumes created: 1
> Number of Buckets created: 1
> Number of Keys added: 1
> Ratis replication factor: THREE
> Ratis replication type: STAND_ALONE
> Average Time spent in volume creation: 00:00:00,002
> Average Time spent in bucket creation: 00:00:00,000
> Average Time spent in key creation: 00:00:00,002
> Average Time spent in key write: 00:00:00,101
> Total bytes written: 0
> Total Execution time: 00:00:05,699
>  
> {code}
> ***
> [root@ozoneha-2 ozone-0.5.0-SNAPSHOT]# bin/ozone sh key list 
> /vol-0-28271/bucket-0-95211
> [
> {   "version" : 0,   "md5hash" : null,   "createdOn" : "Fri, 26 Jul 2019 
> 01:02:08 GMT",   "modifiedOn" : "Fri, 26 Jul 2019 01:02:09 GMT",   "size" : 
> 36,   "keyName" : "key-0-98235",   "type" : null }
> ]
>  
> This is because of the below code in RandomKeyGenerator:
> {code:java}
> for (long nrRemaining = keySize - randomValue.length;
>  nrRemaining > 0; nrRemaining -= bufferSize) {
>  int curSize = (int) Math.min(bufferSize, nrRemaining);
>  os.write(keyValueBuffer, 0, curSize);
> }
> os.write(randomValue);
> os.close();{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1863) Freon RandomKeyGenerator even if keySize is set to 0, it returns some random data to key

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1863?focusedWorklogId=291620=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291620
 ]

ASF GitHub Bot logged work on HDDS-1863:


Author: ASF GitHub Bot
Created on: 08/Aug/19 22:40
Start Date: 08/Aug/19 22:40
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1167: HDDS-1863. 
Freon RandomKeyGenerator even if keySize is set to 0, it returns some random 
data to key.
URL: https://github.com/apache/hadoop/pull/1167#issuecomment-519712984
 
 
   Thank You @xiaoyuyao and @arp7 for the review.
   Test failures are not related to this patch. I will commit this to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291620)
Time Spent: 3h 50m  (was: 3h 40m)

> Freon RandomKeyGenerator even if keySize is set to 0, it returns some random 
> data to key
> 
>
> Key: HDDS-1863
> URL: https://issues.apache.org/jira/browse/HDDS-1863
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
>  
> {code:java}
> ***
> Status: Success
> Git Base Revision: e97acb3bd8f3befd27418996fa5d4b50bf2e17bf
> Number of Volumes created: 1
> Number of Buckets created: 1
> Number of Keys added: 1
> Ratis replication factor: THREE
> Ratis replication type: STAND_ALONE
> Average Time spent in volume creation: 00:00:00,002
> Average Time spent in bucket creation: 00:00:00,000
> Average Time spent in key creation: 00:00:00,002
> Average Time spent in key write: 00:00:00,101
> Total bytes written: 0
> Total Execution time: 00:00:05,699
>  
> {code}
> ***
> [root@ozoneha-2 ozone-0.5.0-SNAPSHOT]# bin/ozone sh key list 
> /vol-0-28271/bucket-0-95211
> [
> {   "version" : 0,   "md5hash" : null,   "createdOn" : "Fri, 26 Jul 2019 
> 01:02:08 GMT",   "modifiedOn" : "Fri, 26 Jul 2019 01:02:09 GMT",   "size" : 
> 36,   "keyName" : "key-0-98235",   "type" : null }
> ]
>  
> This is because of the below code in RandomKeyGenerator:
> {code:java}
> for (long nrRemaining = keySize - randomValue.length;
>  nrRemaining > 0; nrRemaining -= bufferSize) {
>  int curSize = (int) Math.min(bufferSize, nrRemaining);
>  os.write(keyValueBuffer, 0, curSize);
> }
> os.write(randomValue);
> os.close();{code}
>  
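
A note on the snippet above: the for loop only pads up to keySize, but the trailing os.write(randomValue) runs unconditionally, so a key created with keySize 0 still ends up holding the 36-byte random UUID value, which is exactly the 36-byte key shown in the listing. Below is a minimal, self-contained sketch of one way to guard that trailing write; it is an illustration only, not the patch that was merged, and the class and method names are invented for the example.

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class KeyPayloadSketch {

  private static final int BUFFER_SIZE = 4096;
  private static final byte[] KEY_VALUE_BUFFER = new byte[BUFFER_SIZE];

  /**
   * Writes exactly keySize bytes: buffer-sized chunks first, then only as
   * much of the random suffix as still fits. With keySize == 0 nothing is
   * written at all.
   */
  static void writeKey(OutputStream os, long keySize, byte[] randomValue)
      throws IOException {
    for (long nrRemaining = keySize - randomValue.length;
         nrRemaining > 0; nrRemaining -= BUFFER_SIZE) {
      int curSize = (int) Math.min(BUFFER_SIZE, nrRemaining);
      os.write(KEY_VALUE_BUFFER, 0, curSize);
    }
    // Guard the trailing write instead of issuing it unconditionally.
    if (keySize > 0) {
      os.write(randomValue, 0, (int) Math.min(randomValue.length, keySize));
    }
    os.close();
  }

  public static void main(String[] args) throws IOException {
    byte[] randomValue =
        UUID.randomUUID().toString().getBytes(StandardCharsets.UTF_8);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    writeKey(out, 0, randomValue);
    System.out.println("bytes written for keySize=0: " + out.size()); // prints 0
  }
}
{code}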



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=291619=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291619
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 08/Aug/19 22:39
Start Date: 08/Aug/19 22:39
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1202: HDDS-1884. 
Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-519712725
 
 
   Thank You @arp7 and @xiaoyuyao for the review.
   I will commit this to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291619)
Time Spent: 8h 40m  (was: 8.5h)

> Support Bucket ACL operations for OM HA.
> 
>
> Key: HDDS-1884
> URL: https://issues.apache.org/jira/browse/HDDS-1884
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h 40m
>  Remaining Estimate: 0h
>
> HDDS-1540 adds 4 new APIs for the Ozone RPC client. The OM HA implementation 
> needs to handle them.
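
For readers who are not in the Ozone code base: the four APIs in question are the object-level ACL operations (add, remove, set and get an ACL on a volume, bucket, key or prefix). The sketch below only illustrates their general shape so the rest of this thread is easier to follow; the type parameters stand in for the real Ozone object and ACL classes, and the exact signatures in the source tree may differ.

{code:java}
import java.io.IOException;
import java.util.List;

/**
 * Illustrative sketch of the four ACL operations an OM HA request path has
 * to route through the leader OM; names and types are approximations.
 */
interface ObjectAclOperations<OBJ, ACL> {

  /** Add a single ACL entry to the given object. */
  boolean addAcl(OBJ obj, ACL acl) throws IOException;

  /** Remove a single ACL entry from the given object. */
  boolean removeAcl(OBJ obj, ACL acl) throws IOException;

  /** Replace all existing ACL entries with the given list. */
  boolean setAcl(OBJ obj, List<ACL> acls) throws IOException;

  /** Return the ACL entries currently attached to the object. */
  List<ACL> getAcl(OBJ obj) throws IOException;
}
{code}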



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=291618=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291618
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 08/Aug/19 22:38
Start Date: 08/Aug/19 22:38
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1202: HDDS-1884. 
Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-519712725
 
 
   Thank You @arp7 and @xiaoyuyao for the review.
   I will commit this to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291618)
Time Spent: 8.5h  (was: 8h 20m)

> Support Bucket ACL operations for OM HA.
> 
>
> Key: HDDS-1884
> URL: https://issues.apache.org/jira/browse/HDDS-1884
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8.5h
>  Remaining Estimate: 0h
>
> HDDS-1540 adds 4 new APIs for the Ozone RPC client. The OM HA implementation 
> needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=291615=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291615
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 08/Aug/19 22:33
Start Date: 08/Aug/19 22:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1202: HDDS-1884. 
Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-519668220
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for branch |
   | +1 | mvninstall | 625 | trunk passed |
   | +1 | compile | 410 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 955 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 180 | trunk passed |
   | 0 | spotbugs | 480 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 699 | trunk passed |
   | -0 | patch | 518 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 593 | the patch passed |
   | +1 | compile | 393 | the patch passed |
   | +1 | cc | 393 | the patch passed |
   | +1 | javac | 393 | the patch passed |
   | +1 | checkstyle | 82 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 745 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | the patch passed |
   | +1 | findbugs | 727 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 361 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2061 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 8374 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/17/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1202 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 2d7d04afcdc6 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3ac0f3a |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/17/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/17/testReport/ |
   | Max. process+thread count | 4446 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/17/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291615)
Time Spent: 8h 20m  (was: 8h 10m)

> Support Bucket ACL operations for OM HA.
> 
>
> Key: HDDS-1884
> URL: 

[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=291612=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291612
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 08/Aug/19 22:32
Start Date: 08/Aug/19 22:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1202: HDDS-1884. 
Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-519347577
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 78 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for branch |
   | +1 | mvninstall | 640 | trunk passed |
   | +1 | compile | 374 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 981 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 184 | trunk passed |
   | 0 | spotbugs | 454 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 671 | trunk passed |
   | -0 | patch | 494 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 630 | the patch passed |
   | +1 | compile | 395 | the patch passed |
   | +1 | cc | 395 | the patch passed |
   | +1 | javac | 395 | the patch passed |
   | +1 | checkstyle | 77 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 741 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | the patch passed |
   | +1 | findbugs | 752 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 374 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2127 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 8586 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1202 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux e3d3a3a29049 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 70b4617 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/14/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/14/testReport/ |
   | Max. process+thread count | 5345 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/14/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291612)
Time Spent: 7h 50m  (was: 7h 40m)

> Support Bucket ACL operations for OM 

[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=291613=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291613
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 08/Aug/19 22:32
Start Date: 08/Aug/19 22:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1202: HDDS-1884. 
Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-519476711
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 17 | Maven dependency ordering for branch |
   | +1 | mvninstall | 603 | trunk passed |
   | +1 | compile | 351 | trunk passed |
   | +1 | checkstyle | 61 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 790 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 146 | trunk passed |
   | 0 | spotbugs | 426 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 621 | trunk passed |
   | -0 | patch | 462 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | +1 | mvninstall | 532 | the patch passed |
   | +1 | compile | 358 | the patch passed |
   | +1 | cc | 358 | the patch passed |
   | +1 | javac | 358 | the patch passed |
   | +1 | checkstyle | 63 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 626 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 83 | hadoop-ozone generated 9 new + 13 unchanged - 0 fixed 
= 22 total (was 13) |
   | +1 | findbugs | 642 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 296 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1308 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 6811 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestKeyInputStream |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/15/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1202 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 83679aa65de7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 00b5a27 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/15/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/15/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/15/testReport/ |
   | Max. process+thread count | 3756 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/15/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291613)
Time Spent: 8h  (was: 7h 50m)

> Support Bucket ACL operations for OM HA.
> 
>
> Key: HDDS-1884
> URL: https://issues.apache.org/jira/browse/HDDS-1884
> Project: Hadoop Distributed Data Store

[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=291614=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291614
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 08/Aug/19 22:32
Start Date: 08/Aug/19 22:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1202: HDDS-1884. 
Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-519666099
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 141 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 797 | trunk passed |
   | +1 | compile | 450 | trunk passed |
   | +1 | checkstyle | 93 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1028 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | trunk passed |
   | 0 | spotbugs | 457 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 668 | trunk passed |
   | -0 | patch | 506 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 592 | the patch passed |
   | +1 | compile | 391 | the patch passed |
   | +1 | cc | 391 | the patch passed |
   | +1 | javac | 391 | the patch passed |
   | +1 | checkstyle | 83 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 740 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | the patch passed |
   | +1 | findbugs | 697 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 359 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2088 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 8715 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/16/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1202 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 2824d3171f0b 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3ac0f3a |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/16/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/16/testReport/ |
   | Max. process+thread count | 5281 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/16/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291614)
Time Spent: 8h 10m  (was: 8h)

> Support Bucket ACL operations for OM HA.
> 

[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=291611=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291611
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 08/Aug/19 22:32
Start Date: 08/Aug/19 22:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1202: HDDS-1884. 
Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-519259762
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 35 | Maven dependency ordering for branch |
   | +1 | mvninstall | 588 | trunk passed |
   | +1 | compile | 345 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 780 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 150 | trunk passed |
   | 0 | spotbugs | 423 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 615 | trunk passed |
   | -0 | patch | 467 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 526 | the patch passed |
   | +1 | compile | 358 | the patch passed |
   | +1 | cc | 358 | the patch passed |
   | +1 | javac | 358 | the patch passed |
   | +1 | checkstyle | 76 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 648 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | the patch passed |
   | +1 | findbugs | 621 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 299 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2900 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 8428 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1202 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 7a82cc065963 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 827dbb1 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/13/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/13/testReport/ |
   | Max. process+thread count | 3690 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/13/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291611)
Time Spent: 7h 40m  (was: 7.5h)

> Support Bucket ACL 

[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=291609=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291609
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 08/Aug/19 22:29
Start Date: 08/Aug/19 22:29
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1202: HDDS-1884. 
Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-519710636
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291609)
Time Spent: 7.5h  (was: 7h 20m)

> Support Bucket ACL operations for OM HA.
> 
>
> Key: HDDS-1884
> URL: https://issues.apache.org/jira/browse/HDDS-1884
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> HDDS-1540 adds 4 new APIs for the Ozone RPC client. The OM HA implementation 
> needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14703) NameNode Fine-Grained Locking via Metadata Partitioning

2019-08-08 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903374#comment-16903374
 ] 

Konstantin Shvachko edited comment on HDFS-14703 at 8/8/19 10:10 PM:
-

Hi [~hexiaoqiao], thanks for reviewing the doc. Very good questions:
# "Cousins" means files like {{/a/b/c/d}} and {{/a/b/m/n}}. They will have 
keys, respectively, {{}} and {{}}, which have 
common prefix {{}} and therefore are likely to fall into the same 
RangeGSet. In your example {{}} is the parent of {{}} and this key definition does not guarantee them to be in the same range.
# Deleting a directory {{/a/b/c}} means deleting the entire sub-tree underneath 
this directory. We should lock all RangeGSets involved in such deletion, 
particularly the one containing file {{f}}. So {{f}} cannot be modified 
concurrently with the delete.
# Just to clarify RangeMap is the upper level part of PartitionedGSet, which 
maps key ranges into RangeGSets. So there is only one RangeMap and many 
RangeGSets. Holding a lock on RangeMap is akin to holding a global lock. You 
make a good point that some operations like failover, large deletes, renames, 
quota changes will still require a global lock. The lock on RangeMap could play 
the role of such a global lock. This should be defined in more detail within the 
design of LatchLock. Ideally we should retain FSNamesystemLock as a global lock 
for some operations. This will also help us gradually switch operations from 
FSNamesystemLock to LatchLock.
# I don't know what the next bottleneck will be, but you are absolutely 
correct there will be something. For edits log, I indeed saw while running my 
benchmarks that the number of transactions batched together while journaling 
was increasing. This is expected and desirable behavior, since writing large 
batches to a disk is more efficient than lots of small writes.


was (Author: shv):
Hi [~hexiaoqiao], thanks for reviewing the doc. Very good questions:
# "Cousins" means files like {{/a/b/c/d}} and {{/a/b/m/n}}. They will have 
keys, respectively, {{}} and {{}}, which have 
common prefix {{}} and therefore are likely to fall into the same 
RangeGSet. In your example {{}} is the parent of {{}} and this key definition does not guarantee them to be in the same range.
# Deleting a directory {{/a/b/c}} means deleting the entire sub-tree underneath 
this directory. We should lock all RangeGSets involved in such deletion, 
particularly the one containing containing file {{f}}. So {{f}} cannot be 
modified concurrently with the delete.
# Just to clarify RangeMap is the upper level part of PartitionedGSet, which 
maps key ranges into RangeGSets. So there is only one RangeMap and many 
RangeGSets. Holding a lock on RangeMap is akin to holding a global lock. You 
make a good point that some operations like failover, large deletes, renames, 
quota changes will still require a global lock. The lock on RangeMap could play 
the role of such global lock. This should be defined in more details within the 
design of LatchLock. Ideally we should retain FSNamesystemLock as a global lock 
for some operations. This will also help us gradually switch operations from 
FSNamesystemLock to LatchLock.
# I don't know what the next bottleneck we will see, but you are absolutely 
correct there will be something. For edits log, I indeed saw while running my 
benchmarks that the number of transactions batched together while journaling 
was increasing. This is expected and desirable behavior, since writing large 
batches to a disk is more efficient than lots of small writes.

> NameNode Fine-Grained Locking via Metadata Partitioning
> ---
>
> Key: HDFS-14703
> URL: https://issues.apache.org/jira/browse/HDFS-14703
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: NameNode Fine-Grained Locking.pdf
>
>
> We target to enable fine-grained locking by splitting the in-memory namespace 
> into multiple partitions each having a separate lock. Intended to improve 
> performance of NameNode write operations.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14703) NameNode Fine-Grained Locking via Metadata Partitioning

2019-08-08 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903374#comment-16903374
 ] 

Konstantin Shvachko commented on HDFS-14703:


Hi [~hexiaoqiao], thanks for reviewing the doc. Very good questions:
# "Cousins" means files like {{/a/b/c/d}} and {{/a/b/m/n}}. They will have 
keys, respectively, {{}} and {{}}, which have 
common prefix {{}} and therefore are likely to fall into the same 
RangeGSet. In your example {{}} is the parent of {{}} and this key definition does not guarantee them to be in the same range.
# Deleting a directory {{/a/b/c}} means deleting the entire sub-tree underneath 
this directory. We should lock all RangeGSets involved in such deletion, 
particularly the one containing file {{f}}. So {{f}} cannot be 
modified concurrently with the delete.
# Just to clarify RangeMap is the upper level part of PartitionedGSet, which 
maps key ranges into RangeGSets. So there is only one RangeMap and many 
RangeGSets. Holding a lock on RangeMap is akin to holding a global lock. You 
make a good point that some operations like failover, large deletes, renames, 
quota changes will still require a global lock. The lock on RangeMap could play 
the role of such a global lock. This should be defined in more detail within the 
design of LatchLock (see the sketch after this list). Ideally we should retain FSNamesystemLock as a global lock 
for some operations. This will also help us gradually switch operations from 
FSNamesystemLock to LatchLock.
# I don't know what the next bottleneck will be, but you are absolutely 
correct there will be something. For edits log, I indeed saw while running my 
benchmarks that the number of transactions batched together while journaling 
was increasing. This is expected and desirable behavior, since writing large 
batches to a disk is more efficient than lots of small writes.
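
To make the RangeMap/RangeGSet relationship in the points above concrete, here is a toy sketch of the layout being discussed: a single top-level range index maps key ranges to partitions, each partition carries its own lock, and an operation that spans many ranges degenerates into something close to the global lock. This is an illustration only, not the PartitionedGSet or LatchLock code from the design document, and all names in it are invented.

{code:java}
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/** Toy model: one range index over many independently locked partitions. */
class PartitionedNamespaceSketch {

  /** One partition: its own lock plus the entries whose keys fall in range. */
  static final class RangePartition {
    final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    final NavigableMap<String, String> inodes = new TreeMap<>();
  }

  /** The single top-level "RangeMap": start key of each range -> partition. */
  private final ConcurrentSkipListMap<String, RangePartition> rangeMap =
      new ConcurrentSkipListMap<>();

  PartitionedNamespaceSketch(List<String> rangeStartKeys) {
    rangeMap.put("", new RangePartition());   // catch-all lowest range
    for (String start : rangeStartKeys) {
      rangeMap.put(start, new RangePartition());
    }
  }

  /** Keys sharing a long common prefix usually land in the same partition. */
  private RangePartition partitionFor(String key) {
    return rangeMap.floorEntry(key).getValue();
  }

  /** Fine-grained write: only the partition owning the key is locked. */
  void put(String key, String inode) {
    RangePartition p = partitionFor(key);
    p.lock.writeLock().lock();
    try {
      p.inodes.put(key, inode);
    } finally {
      p.lock.writeLock().unlock();
    }
  }

  /**
   * A large delete may touch many ranges; this toy version conservatively
   * locks every partition, which is effectively the global-lock case the
   * comment describes.
   */
  void deleteSubtree(String prefix) {
    for (RangePartition p : rangeMap.values()) {
      p.lock.writeLock().lock();
      try {
        p.inodes.subMap(prefix, prefix + Character.MAX_VALUE).clear();
      } finally {
        p.lock.writeLock().unlock();
      }
    }
  }
}
{code}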

> NameNode Fine-Grained Locking via Metadata Partitioning
> ---
>
> Key: HDFS-14703
> URL: https://issues.apache.org/jira/browse/HDFS-14703
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: NameNode Fine-Grained Locking.pdf
>
>
> We target to enable fine-grained locking by splitting the in-memory namespace 
> into multiple partitions each having a separate lock. Intended to improve 
> performance of NameNode write operations.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14696) Backport HDFS-11273 to branch-2 (Move TransferFsImage#doGetUrl function to a Util class)

2019-08-08 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-14696.

   Resolution: Fixed
Fix Version/s: 2.10.0

Merged the PR and resolved this jira. Thanks [~smeng]

> Backport HDFS-11273 to branch-2 (Move TransferFsImage#doGetUrl function to a 
> Util class)
> 
>
> Key: HDFS-14696
> URL: https://issues.apache.org/jira/browse/HDFS-14696
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 2.10.0
>
> Attachments: HDFS-14696-branch-2.003.patch
>
>
> Backporting HDFS-11273 Move TransferFsImage#doGetUrl function to a Util class 
> to branch-2.
> To avoid confusion with branch-2 patches in HDFS-11273, patch revision number 
> will continue from 003.
> *HDFS-14696-branch-2.003.patch* is the same as 
> *HDFS-11273-branch-2.003.patch*.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14204) Backport HDFS-12943 to branch-2

2019-08-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903370#comment-16903370
 ] 

Hadoop QA commented on HDFS-14204:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 35 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
7s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
46s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
25s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
39s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
24s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
56s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
41s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
19s{color} | {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  2m 19s{color} | 
{color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 19s{color} 
| {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
16s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 16s{color} 
| {color:red} root-jdk1.8.0_222 with JDK v1.8.0_222 generated 1 new + 1345 
unchanged - 1 fixed = 1346 total (was 1346) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 27s{color} | {color:orange} root: The patch generated 36 new + 3143 
unchanged - 15 fixed = 3179 total (was 3158) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
17s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95 with JDK 
v1.7.0_95 generated 1 new + 9 unchanged - 0 fixed = 10 total (was 9) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
55s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_222 with JDK 
v1.8.0_222 generated 1 new + 9 unchanged - 0 fixed = 10 total (was 9) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
24s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | 

[jira] [Work logged] (HDDS-1863) Freon RandomKeyGenerator even if keySize is set to 0, it returns some random data to key

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1863?focusedWorklogId=291593=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291593
 ]

ASF GitHub Bot logged work on HDDS-1863:


Author: ASF GitHub Bot
Created on: 08/Aug/19 21:51
Start Date: 08/Aug/19 21:51
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1167: HDDS-1863. 
Freon RandomKeyGenerator even if keySize is set to 0, it returns some random 
data to key.
URL: https://github.com/apache/hadoop/pull/1167#discussion_r312258419
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
 ##
 @@ -263,9 +262,7 @@ public Void call() throws Exception {
 // Compute the common initial digest for all keys without their UUID
 if (validateWrites) {
   commonInitialMD = DigestUtils.getDigest(DIGEST_ALGORITHM);
-  int uuidLength = UUID.randomUUID().toString().length();
-  keySize = Math.max(uuidLength, keySize);
-  for (long nrRemaining = keySize - uuidLength; nrRemaining > 0;
+  for (long nrRemaining = keySize; nrRemaining > 0;
 
 Review comment:
   LGTM, +1. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291593)
Time Spent: 3h 40m  (was: 3.5h)

> Freon RandomKeyGenerator even if keySize is set to 0, it returns some random 
> data to key
> 
>
> Key: HDDS-1863
> URL: https://issues.apache.org/jira/browse/HDDS-1863
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
>  
> {code:java}
> ***
> Status: Success
> Git Base Revision: e97acb3bd8f3befd27418996fa5d4b50bf2e17bf
> Number of Volumes created: 1
> Number of Buckets created: 1
> Number of Keys added: 1
> Ratis replication factor: THREE
> Ratis replication type: STAND_ALONE
> Average Time spent in volume creation: 00:00:00,002
> Average Time spent in bucket creation: 00:00:00,000
> Average Time spent in key creation: 00:00:00,002
> Average Time spent in key write: 00:00:00,101
> Total bytes written: 0
> Total Execution time: 00:00:05,699
>  
> {code}
> ***
> [root@ozoneha-2 ozone-0.5.0-SNAPSHOT]# bin/ozone sh key list 
> /vol-0-28271/bucket-0-95211
> [
> {   "version" : 0,   "md5hash" : null,   "createdOn" : "Fri, 26 Jul 2019 
> 01:02:08 GMT",   "modifiedOn" : "Fri, 26 Jul 2019 01:02:09 GMT",   "size" : 
> 36,   "keyName" : "key-0-98235",   "type" : null }
> ]
>  
> This is because of the below code in RandomKeyGenerator:
> {code:java}
> for (long nrRemaining = keySize - randomValue.length;
>  nrRemaining > 0; nrRemaining -= bufferSize) {
>  int curSize = (int) Math.min(bufferSize, nrRemaining);
>  os.write(keyValueBuffer, 0, curSize);
> }
> os.write(randomValue);
> os.close();{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14706) Checksums are not checked if block meta file is less than 7 bytes

2019-08-08 Thread Stephen O'Donnell (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903359#comment-16903359
 ] 

Stephen O'Donnell commented on HDFS-14706:
--

Uploaded an initial patch to see if it breaks any existing tests. This change 
still needs some tests to prove these changes are OK, as I have only tested 
manually so far.

> Checksums are not checked if block meta file is less than 7 bytes
> -
>
> Key: HDFS-14706
> URL: https://issues.apache.org/jira/browse/HDFS-14706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14706.001.patch
>
>
> If a block and its meta file are corrupted in a certain way, the corruption 
> can go unnoticed by a client, causing it to return invalid data.
> The meta file is expected to always have a header of 7 bytes and then a 
> series of checksums depending on the length of the block.
> If the metafile gets corrupted in such a way that it is between zero and 
> less than 7 bytes in length, then the header is incomplete. In 
> BlockSender.java the logic checks if the metafile length is at least the size 
> of the header and if it is not, it does not error, but instead returns a NULL 
> checksum type to the client.
> https://github.com/apache/hadoop/blob/b77761b0e37703beb2c033029e4c0d5ad1dce794/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java#L327-L357
> If the client receives a NULL checksum type, it will not validate checksums 
> at all, and even corrupted data will be returned to the reader. This means 
> the corruption will go unnoticed and HDFS will never repair it. Even the Volume 
> Scanner will not notice the corruption as the checksums are silently ignored.
> Additionally, if the meta file does have enough bytes that it attempts to load 
> the header, and the header is corrupted such that it is not valid, it can 
> cause the datanode Volume Scanner to exit with an exception like the 
> following:
> {code}
> 2019-08-06 18:16:39,151 ERROR datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting because of exception 
> java.lang.IllegalArgumentException: id=51 out of range [0, 5)
>   at 
> org.apache.hadoop.util.DataChecksum$Type.valueOf(DataChecksum.java:76)
>   at 
> org.apache.hadoop.util.DataChecksum.newDataChecksum(DataChecksum.java:167)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:173)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:139)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:153)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.loadLastPartialChunkChecksum(FsVolumeImpl.java:1140)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FinalizedReplica.loadLastPartialChunkChecksum(FinalizedReplica.java:157)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.getPartialChunkChecksumForFinalized(BlockSender.java:451)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.(BlockSender.java:266)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.scanBlock(VolumeScanner.java:446)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:558)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:633)
> 2019-08-06 18:16:39,152 INFO datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting.
> {code}
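
To restate the failure mode in the description: a meta file shorter than its 7-byte header is currently mapped to a NULL checksum type, which silently disables verification for every subsequent reader. Purely as an illustration of the kind of guard the report argues for (this is not the attached HDFS-14706.001.patch, and the class and method names are invented), a check in the code path that opens the block meta file might look like this:

{code:java}
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

/** Sketch: refuse to fall back to a NULL checksum when the header is short. */
final class MetaHeaderCheckSketch {

  /** Version (2 bytes) + checksum type (1 byte) + bytes-per-checksum (4 bytes). */
  static final int HEADER_LEN = 7;

  static final class Header {
    final short version;
    final byte checksumType;
    final int bytesPerChecksum;

    Header(short version, byte checksumType, int bytesPerChecksum) {
      this.version = version;
      this.checksumType = checksumType;
      this.bytesPerChecksum = bytesPerChecksum;
    }
  }

  /**
   * Reads the 7-byte meta header. A meta file shorter than the header is
   * reported as corruption instead of being mapped to a NULL checksum that
   * would disable verification downstream.
   */
  static Header readHeader(InputStream metaIn, long metaFileLength)
      throws IOException {
    if (metaFileLength < HEADER_LEN) {
      throw new IOException("Block meta file is " + metaFileLength
          + " bytes, shorter than the " + HEADER_LEN
          + "-byte header; treating the block as corrupt");
    }
    DataInputStream in = new DataInputStream(metaIn);
    try {
      return new Header(in.readShort(), in.readByte(), in.readInt());
    } catch (EOFException e) {
      throw new IOException("Truncated block meta header", e);
    }
  }
}
{code}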



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14042) Fix NPE when PROVIDED storage is missing

2019-08-08 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14042:
---
Fix Version/s: 3.1.2
   3.2.1

> Fix NPE when PROVIDED storage is missing
> 
>
> Key: HDFS-14042
> URL: https://issues.apache.org/jira/browse/HDFS-14042
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Major
> Fix For: 3.1.2, 3.3.0, 3.2.1
>
> Attachments: HDFS-14042.001.patch, HDFS-14042.002.patch
>
>
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.updateStorageStats(DatanodeDescriptor.java:460)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.updateHeartbeatState(DatanodeDescriptor.java:390)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager.updateLifeline(HeartbeatManager.java:254)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.handleLifeline(DatanodeManager.java:1789)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.handleLifeline(FSNamesystem.java:3997)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.sendLifeline(NameNodeRpcServer.java:1666)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeLifelineProtocolServerSideTranslatorPB.sendLifeline(DatanodeLifelineProtocolServerSideTranslatorPB.java:62)
> at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeLifelineProtocolProtos$DatanodeLifelineProtocolService$2.callBlockingMethod(DatanodeLifelineProtocolProtos.java:409)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:898)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:844)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2727)



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14706) Checksums are not checked if block meta file is less than 7 bytes

2019-08-08 Thread Stephen O'Donnell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-14706:
-
Attachment: HDFS-14706.001.patch

> Checksums are not checked if block meta file is less than 7 bytes
> -
>
> Key: HDFS-14706
> URL: https://issues.apache.org/jira/browse/HDFS-14706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14706.001.patch
>
>
> If a block and its meta file are corrupted in a certain way, the corruption 
> can go unnoticed by a client, causing it to return invalid data.
> The meta file is expected to always have a header of 7 bytes and then a 
> series of checksums depending on the length of the block.
> If the metafile gets corrupted in such a way that it is between zero and 
> less than 7 bytes in length, then the header is incomplete. In 
> BlockSender.java the logic checks if the metafile length is at least the size 
> of the header and if it is not, it does not error, but instead returns a NULL 
> checksum type to the client.
> https://github.com/apache/hadoop/blob/b77761b0e37703beb2c033029e4c0d5ad1dce794/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java#L327-L357
> If the client receives a NULL checksum type, it will not validate checksums 
> at all, and even corrupted data will be returned to the reader. This means 
> the corruption will go unnoticed and HDFS will never repair it. Even the Volume 
> Scanner will not notice the corruption as the checksums are silently ignored.
> Additionally, if the meta file does have enough bytes that it attempts to load 
> the header, and the header is corrupted such that it is not valid, it can 
> cause the datanode Volume Scanner to exit with an exception like the 
> following:
> {code}
> 2019-08-06 18:16:39,151 ERROR datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting because of exception 
> java.lang.IllegalArgumentException: id=51 out of range [0, 5)
>   at 
> org.apache.hadoop.util.DataChecksum$Type.valueOf(DataChecksum.java:76)
>   at 
> org.apache.hadoop.util.DataChecksum.newDataChecksum(DataChecksum.java:167)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:173)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:139)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:153)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.loadLastPartialChunkChecksum(FsVolumeImpl.java:1140)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FinalizedReplica.loadLastPartialChunkChecksum(FinalizedReplica.java:157)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.getPartialChunkChecksumForFinalized(BlockSender.java:451)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.(BlockSender.java:266)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.scanBlock(VolumeScanner.java:446)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:558)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:633)
> 2019-08-06 18:16:39,152 INFO datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14706) Checksums are not checked if block meta file is less than 7 bytes

2019-08-08 Thread Stephen O'Donnell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-14706:
-
Status: Patch Available  (was: Open)

> Checksums are not checked if block meta file is less than 7 bytes
> -
>
> Key: HDFS-14706
> URL: https://issues.apache.org/jira/browse/HDFS-14706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14706.001.patch
>
>
> If a block and its meta file are corrupted in a certain way, the corruption 
> can go unnoticed by a client, causing it to return invalid data.
> The meta file is expected to always have a header of 7 bytes and then a 
> series of checksums depending on the length of the block.
> If the metafile gets corrupted in such a way that its length is between zero 
> and 7 bytes (i.e. shorter than the header), then the header is incomplete. In 
> BlockSender.java the logic checks whether the metafile length is at least the 
> size of the header; if it is not, it does not raise an error, but instead 
> returns a NULL checksum type to the client.
> https://github.com/apache/hadoop/blob/b77761b0e37703beb2c033029e4c0d5ad1dce794/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java#L327-L357
> If the client receives a NULL checksum type, it will not validate checksums 
> at all, and even corrupted data will be returned to the reader. This means the 
> corruption will go unnoticed and HDFS will never repair it. Even the Volume 
> Scanner will not notice the corruption as the checksums are silently ignored.
> Additionally, if the meta file does have enough bytes to attempt loading the 
> header, but the header is corrupted such that it is not valid, it can cause 
> the datanode Volume Scanner to exit, with an exception like the 
> following:
> {code}
> 2019-08-06 18:16:39,151 ERROR datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting because of exception 
> java.lang.IllegalArgumentException: id=51 out of range [0, 5)
>   at 
> org.apache.hadoop.util.DataChecksum$Type.valueOf(DataChecksum.java:76)
>   at 
> org.apache.hadoop.util.DataChecksum.newDataChecksum(DataChecksum.java:167)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:173)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:139)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:153)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.loadLastPartialChunkChecksum(FsVolumeImpl.java:1140)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FinalizedReplica.loadLastPartialChunkChecksum(FinalizedReplica.java:157)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.getPartialChunkChecksumForFinalized(BlockSender.java:451)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:266)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.scanBlock(VolumeScanner.java:446)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:558)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:633)
> 2019-08-06 18:16:39,152 INFO datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14204) Backport HDFS-12943 to branch-2

2019-08-08 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903334#comment-16903334
 ] 

Konstantin Shvachko commented on HDFS-14204:


Hey [~vagarychen] v6 patch doesn't look right, because some Observer and 
AlignmentContext tests are failing. Please take a look. 

> Backport HDFS-12943 to branch-2
> ---
>
> Key: HDFS-14204
> URL: https://issues.apache.org/jira/browse/HDFS-14204
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14204-branch-2.001.patch, 
> HDFS-14204-branch-2.002.patch, HDFS-14204-branch-2.003.patch, 
> HDFS-14204-branch-2.004.patch, HDFS-14204-branch-2.005.patch, 
> HDFS-14204-branch-2.006.patch, HDFS-14204-branch-2.007.patch
>
>
> Currently, consistent read from standby feature (HDFS-12943) is only in trunk 
> (branch-3). This JIRA aims to backport the feature to branch-2.  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1891) Ozone fs shell command should work with default port when port number is not specified

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1891?focusedWorklogId=291576=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291576
 ]

ASF GitHub Bot logged work on HDDS-1891:


Author: ASF GitHub Bot
Created on: 08/Aug/19 21:06
Start Date: 08/Aug/19 21:06
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1218: HDDS-1891. 
Ozone fs shell command should work with default port when port number is not 
specified
URL: https://github.com/apache/hadoop/pull/1218#issuecomment-519687588
 
 
   > One more comment, I think we should make a similar change in 
BasicOzoneFileSystem.java.
   > 
   > See, can we have some utility method which can be used across the 2 classes.
   
   OzoneFileSystem extends BasicOzoneFileSystem, so this has been taken care of. 
Just test it out locally.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291576)
Time Spent: 1h 10m  (was: 1h)

> Ozone fs shell command should work with default port when port number is not 
> specified
> --
>
> Key: HDDS-1891
> URL: https://issues.apache.org/jira/browse/HDDS-1891
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {code:bash|title=Without port number -> Error}
> $ ozone fs -ls o3fs://bucket.volume.localhost/
> -ls: Ozone file system url should be either one of the two forms: 
> o3fs://bucket.volume/key  OR o3fs://bucket.volume.om-host.example.com:5678/key
> ...
> {code}
> {code:bash|title=With port number -> Success}
> $ ozone fs -ls o3fs://bucket.volume.localhost:9862/
> Found 1 items
> -rw-rw-rw-   1 hadoop hadoop   1485 2019-08-01 21:14 
> o3fs://bucket.volume.localhost:9862/README.txt
> {code}
> We expect the first command to attempt port 9862 by default.
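
As a rough sketch of the URI handling being requested (not the actual 
BasicOzoneFileSystem change; the helper and default-port constant are assumptions 
for illustration), the authority of an o3fs URI could fall back to the default OM 
port when none is given:

{code:java}
// Minimal sketch, assuming the o3fs authority has the form
// bucket.volume[.om-host[:port]]; names here are invented for illustration.
import java.net.URI;

public class O3fsAuthoritySketch {
  private static final int DEFAULT_OM_PORT = 9862; // default OM RPC port

  /** Returns "host:port" for the OM, applying the default port when absent. */
  public static String resolveOmAddress(URI o3fsUri) {
    String authority = o3fsUri.getAuthority();   // bucket.volume.om-host[:port]
    String[] parts = authority.split("\\.", 3);  // bucket, volume, rest
    if (parts.length < 3) {
      return null;  // o3fs://bucket.volume/ form: OM address comes from config
    }
    String omHostAndMaybePort = parts[2];
    return omHostAndMaybePort.contains(":")
        ? omHostAndMaybePort
        : omHostAndMaybePort + ":" + DEFAULT_OM_PORT;
  }
}
{code}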



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14693) NameNode should log a warning when EditLog IPC logger's pending size exceeds limit.

2019-08-08 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14693:
---
   Resolution: Fixed
Fix Version/s: 3.1.3
   3.2.1
   3.3.0
   Status: Resolved  (was: Patch Available)

Thanks! Pushed the patch to trunk, branch-3.2 and branch-3.1

> NameNode should log a warning when EditLog IPC logger's pending size exceeds 
> limit.
> ---
>
> Key: HDFS-14693
> URL: https://issues.apache.org/jira/browse/HDFS-14693
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Minor
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14693.001.patch
>
>
> In a production environment, there may be some differences between the 
> JournalNodes (e.g. network condition, disk condition, and so on). For example, 
> if one JN's network is much worse than the other JNs', then the time taken by 
> the NN to write to this JN will be much greater than for the other JNs. In 
> this case the IPC Logger thread corresponding to this JN will accumulate many 
> pending edits; when the pending edits exceed the maximum limit (default 10MB), 
> the new edits about to be written to this JN will be silently dropped, which 
> results in gaps in the editlog segment and causes this JN and the NN to 
> repeatedly report the following errors: 
> {code:java}
> org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't 
> write txid 1904164873 expecting nextTxId=1904164871{code}
> Unfortunately, the above error message cannot help us quickly find the root 
> cause, and it took more time to track it down, so it's better to add a warning 
> log here, like this: 
> {code:java}
> 2019-08-02 04:55:05,879 WARN 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager:Pending edits to 
> 192.168.202.13:8485 is going to exceed limit size:10240, current queued edits 
> size:10224, will silently drop 174 bytes of edits!{code}
>  This is just a very small improvement.
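
A minimal stand-alone sketch of the proposed check (not the attached 
HDFS-14693.001.patch; the class and field names are invented) would warn before 
the edits are dropped:

{code:java}
// Sketch only: surface the silent drop with a WARN so the gap in the editlog
// segment can be traced back to the slow JournalNode.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PendingEditsGuard {
  private static final Logger LOG = LoggerFactory.getLogger(PendingEditsGuard.class);

  private final String journalNodeAddr;
  private final long limitBytes;        // e.g. 10 MB by default
  private long queuedBytes;

  public PendingEditsGuard(String journalNodeAddr, long limitBytes) {
    this.journalNodeAddr = journalNodeAddr;
    this.limitBytes = limitBytes;
  }

  /** Returns true if the edit fits in the queue; warns and refuses otherwise. */
  public synchronized boolean tryQueue(int editSizeBytes) {
    if (queuedBytes + editSizeBytes > limitBytes) {
      LOG.warn("Pending edits to {} are going to exceed limit size: {}, current"
          + " queued edits size: {}, will silently drop {} bytes of edits!",
          journalNodeAddr, limitBytes, queuedBytes, editSizeBytes);
      return false;
    }
    queuedBytes += editSizeBytes;
    return true;
  }
}
{code}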



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14459) ClosedChannelException silently ignored in FsVolumeList.addBlockPool()

2019-08-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903322#comment-16903322
 ] 

Hudson commented on HDFS-14459:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17066 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17066/])
HDFS-14459. ClosedChannelException silently ignored in (weichiu: rev 
b0799148cf6e92be540f5665bb571418b916d789)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestAddBlockPoolException.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/AddBlockPoolException.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java


> ClosedChannelException silently ignored in FsVolumeList.addBlockPool()
> --
>
> Key: HDFS-14459
> URL: https://issues.apache.org/jira/browse/HDFS-14459
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14459.001.patch, HDFS-14459.002.patch, 
> HDFS-14459.003.patch
>
>
> Following on from HDFS-14333, I encountered another scenario where a volume 
> with some sort of disk-level error can silently fail to have the blockpool 
> added to itself in FsVolumeList.addBlockPool().
> In the logs for a recent issue we see the following pattern:
> {code}
> 2019-04-24 04:21:27,690 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
> volume - /CDH/sdi1/dfs/dn/current, StorageType: DISK
> 2019-04-24 04:21:27,691 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
> new volume: DS-694ae931-8a4e-42d5-b2b3-d946e35c6b47
> ...
> 2019-04-24 04:21:27,703 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-936404344-xxx-1426594942733 on volume 
> /CDH/sdi1/dfs/dn/current...
> ...
>  2019-04-24 04:21:27,722 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time 
> taken to scan block pool BP-936404344-xxx-1426594942733 on 
> /CDH/sdi1/dfs/dn/current: 19ms
> >
> ...
> 2019-04-24 04:21:29,871 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding 
> replicas to map for block pool BP-936404344-xxx-1426594942733 on volume 
> /CDH/sdi1/dfs/dn/current...
> ...
> 2019-04-24 04:21:29,872 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Caught 
> exception while adding replicas from /CDH/sdi1/dfs/dn/current. Will throw 
> later.
> java.io.IOException: block pool BP-936404344-10.7.192.215-1426594942733 is 
> not found
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getBlockPoolSlice(FsVolumeImpl.java:407)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:864)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:191
> {code}
> The notable point is that the 'scanning block pool' step must not have 
> completed properly for this volume, but nothing was logged, and then the 
> slightly confusing error is logged when attempting to add the replicas. That 
> error occurs because the block pool was not added to the volume by the 
> addBlockPool step.
> The relevant part of the code in 'addBlockPool()' from current trunk looks 
> like:
> {code}
> for (final FsVolumeImpl v : volumes) {
>   Thread t = new Thread() {
> public void run() {
>   try (FsVolumeReference ref = v.obtainReference()) {
> FsDatasetImpl.LOG.info("Scanning block pool " + bpid +
> " on volume " + v + "...");
> long startTime = Time.monotonicNow();
> v.addBlockPool(bpid, conf);
> long timeTaken = Time.monotonicNow() - startTime;
> FsDatasetImpl.LOG.info("Time taken to scan block pool " + bpid +
> " on " + v + ": " + timeTaken + "ms");
>   } catch (ClosedChannelException e) {
> // ignore.
>   } catch (IOException ioe) {
> FsDatasetImpl.LOG.info("Caught exception while scanning " + v +
> ". Will throw later.", ioe);
> unhealthyDataDirs.put(v, ioe);
>   }
> }
>   };
>   blockPoolAddingThreads.add(t);
>   t.start();
> }
> {code}
> As we get the first log message (Scanning block pool ... ), but not the 
> 

[jira] [Commented] (HDFS-14701) Change Log Level to warn in SlotReleaser

2019-08-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903325#comment-16903325
 ] 

Hudson commented on HDFS-14701:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17066 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17066/])
HDFS-14701. Change Log Level to warn in SlotReleaser. Contributed by (weichiu: 
rev 28a848412c8239dfc6bd3e42dbbfe711e19bc8eb)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java


> Change Log Level to warn in SlotReleaser
> 
>
> Key: HDFS-14701
> URL: https://issues.apache.org/jira/browse/HDFS-14701
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14701.001.patch, HDFS-14701.002.patch
>
>
>  If the corresponding DataNode has been stopped or restarted and the DFSClient 
> closes the shared memory segment, the releaseShortCircuitFds API throws an 
> exception and logs an ERROR message. I think it should not be an ERROR log; a 
> WARN log is more reasonable.
> {code:java}
> // @Override
> public void run() {
>   LOG.trace("{}: about to release {}", ShortCircuitCache.this, slot);
>   final DfsClientShm shm = (DfsClientShm)slot.getShm();
>   final DomainSocket shmSock = shm.getPeer().getDomainSocket();
>   final String path = shmSock.getPath();
>   boolean success = false;
>   try (DomainSocket sock = DomainSocket.connect(path);
>DataOutputStream out = new DataOutputStream(
>new BufferedOutputStream(sock.getOutputStream()))) {
> new Sender(out).releaseShortCircuitFds(slot.getSlotId());
> DataInputStream in = new DataInputStream(sock.getInputStream());
> ReleaseShortCircuitAccessResponseProto resp =
> ReleaseShortCircuitAccessResponseProto.parseFrom(
> PBHelperClient.vintPrefixed(in));
> if (resp.getStatus() != Status.SUCCESS) {
>   String error = resp.hasError() ? resp.getError() : "(unknown)";
>   throw new IOException(resp.getStatus().toString() + ": " + error);
> }
> LOG.trace("{}: released {}", this, slot);
> success = true;
>   } catch (IOException e) {
> LOG.error(ShortCircuitCache.this + ": failed to release " +
> "short-circuit shared memory slot " + slot + " by sending " +
> "ReleaseShortCircuitAccessRequestProto to " + path +
> ".  Closing shared memory segment.", e);
>   } finally {
> if (success) {
>   shmManager.freeSlot(slot);
> } else {
>   shm.getEndpointShmManager().shutdown(shm);
> }
>   }
> }
> {code}
>  *exception stack:*
> {code:java}
> 2019-08-05,15:28:03,838 ERROR [ShortCircuitCache_SlotReleaser] 
> org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache: 
> ShortCircuitCache(0x65849546): failed to release short-circuit shared memory 
> slot Slot(slotIdx=62, shm=DfsClientShm(70593ef8b3d84cba3c2f0a1e81377eb1)) by 
> sending ReleaseShortCircuitAccessRequestProto to 
> /home/work/app/hdfs/c3micloudsrv-hdd/datanode/dn_socket.  Closing shared 
> memory segment.
> java.io.IOException: ERROR_INVALID: there is no shared memory segment 
> registered with shmId 70593ef8b3d84cba3c2f0a1e81377eb1
> {code}
>  
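
The whole change being asked for is the severity of that log line. As a 
self-contained sketch (not the 001/002 patch; the class and method names are 
invented), the same message would simply be emitted at WARN, since the slot is 
still cleaned up in the finally block and the condition is expected after a 
DataNode restart:

{code:java}
// Sketch only: log the release failure at WARN instead of ERROR.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SlotReleaseLogging {
  private static final Logger LOG = LoggerFactory.getLogger(SlotReleaseLogging.class);

  public static void logReleaseFailure(String cache, String slot, String path,
      Exception cause) {
    // WARN, not ERROR: the shared memory segment is closed afterwards anyway,
    // so this is a recoverable condition rather than a data-loss situation.
    LOG.warn("{}: failed to release short-circuit shared memory slot {} by"
        + " sending ReleaseShortCircuitAccessRequestProto to {}."
        + "  Closing shared memory segment.", cache, slot, path, cause);
  }
}
{code}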



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1829) On OM reload/restart OmMetrics#numKeys should be updated

2019-08-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903323#comment-16903323
 ] 

Hudson commented on HDDS-1829:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17066 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17066/])
HDDS-1829 On OM reload/restart OmMetrics#numKeys should be updated. (arp7: rev 
14a4ce3cee7aa9e4194ef5de8169c92c2565de65)
* (edit) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/db/TestRDBTableStore.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/Table.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/TypedTable.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/RDBTable.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
* (edit) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/db/TestTypedRDBTableStore.java


> On OM reload/restart OmMetrics#numKeys should be updated
> 
>
> Key: HDDS-1829
> URL: https://issues.apache.org/jira/browse/HDDS-1829
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> When OM is restarted or the state is reloaded, OM Metrics is re-initialized. 
> The saved numKeys value might not be valid as the DB state could have 
> changed. Hence, the numKeys metric must be updated with the correct value on 
> metrics re-initialization.
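
A minimal sketch of the idea (simplified stand-in interfaces, not the real Ozone 
Table/OMMetrics classes) is to recompute the gauge from the reloaded DB instead of 
trusting the previously saved value:

{code:java}
// Sketch only: after an OM restart/reload, reset numKeys from the actual DB.
public class NumKeysReloadSketch {

  interface KeyTable {
    long getEstimatedKeyCount();   // e.g. a row-count estimate from the store
  }

  interface OmMetrics {
    void setNumKeys(long value);
  }

  public static void refreshNumKeys(KeyTable keyTable, OmMetrics metrics) {
    metrics.setNumKeys(keyTable.getEstimatedKeyCount());
  }
}
{code}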



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14662) Document the usage of the new Balancer "asService" parameter

2019-08-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903324#comment-16903324
 ] 

Hudson commented on HDFS-14662:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17066 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17066/])
HDFS-14662. Document the usage of the new Balancer "asService" (weichiu: rev 
23f91f68b817b59d966156edd0b1171155c07742)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsUserGuide.md
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md


> Document the usage of the new Balancer "asService" parameter
> 
>
> Key: HDFS-14662
> URL: https://issues.apache.org/jira/browse/HDFS-14662
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14662.001.patch, HDFS-14662.002.patch, 
> HDFS-14662.003.patch
>
>
> See HDFS-13783; this jira adds documentation on how to run the balancer as a 
> long-running service.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14705) Remove unused configuration dfs.min.replication

2019-08-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903326#comment-16903326
 ] 

Hudson commented on HDFS-14705:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17066 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17066/])
HDFS-14705. Remove unused configuration dfs.min.replication. Contributed 
(weichiu: rev 2265872c2db98fbaf0cd847af6d12cd4bc76e9b2)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java


> Remove unused configuration dfs.min.replication
> ---
>
> Key: HDFS-14705
> URL: https://issues.apache.org/jira/browse/HDFS-14705
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: CR Hota
>Priority: Trivial
> Fix For: 3.3.0
>
> Attachments: HDFS-14705.001.patch
>
>
> A few HDFS tests set a configuration property dfs.min.replication. This is 
> not being used anywhere in the code. It doesn't seem like a leftover from 
> legacy code either. Better to clean them out. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14693) NameNode should log a warning when EditLog IPC logger's pending size exceeds limit.

2019-08-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903327#comment-16903327
 ] 

Hudson commented on HDFS-14693:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17066 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17066/])
HDFS-14693. NameNode should log a warning when EditLog IPC logger's (weichiu: 
rev 6ad9a11494c3aea146d7741bf0ad52ce16ad08e6)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java


> NameNode should log a warning when EditLog IPC logger's pending size exceeds 
> limit.
> ---
>
> Key: HDFS-14693
> URL: https://issues.apache.org/jira/browse/HDFS-14693
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Minor
> Attachments: HDFS-14693.001.patch
>
>
> In a production environment, there may be some differences between the 
> JournalNodes (e.g. network condition, disk condition, and so on). For example, 
> if one JN's network is much worse than the other JNs', then the time taken by 
> the NN to write to this JN will be much greater than for the other JNs. In 
> this case the IPC Logger thread corresponding to this JN will accumulate many 
> pending edits; when the pending edits exceed the maximum limit (default 10MB), 
> the new edits about to be written to this JN will be silently dropped, which 
> results in gaps in the editlog segment and causes this JN and the NN to 
> repeatedly report the following errors: 
> {code:java}
> org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't 
> write txid 1904164873 expecting nextTxId=1904164871{code}
> Unfortunately, the above error message cannot help us quickly find the root 
> cause, and it took more time to track it down, so it's better to add a warning 
> log here, like this: 
> {code:java}
> 2019-08-02 04:55:05,879 WARN 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager:Pending edits to 
> 192.168.202.13:8485 is going to exceed limit size:10240, current queued edits 
> size:10224, will silently drop 174 bytes of edits!{code}
>  This is just a very small improvement.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14701) Change Log Level to warn in SlotReleaser

2019-08-08 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14701:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Pushed 002 to trunk. Thanks [~leosun08] for the patch!

> Change Log Level to warn in SlotReleaser
> 
>
> Key: HDFS-14701
> URL: https://issues.apache.org/jira/browse/HDFS-14701
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14701.001.patch, HDFS-14701.002.patch
>
>
>  If the corresponding DataNode has been stopped or restarted and the DFSClient 
> closes the shared memory segment, the releaseShortCircuitFds API throws an 
> exception and logs an ERROR message. I think it should not be an ERROR log; a 
> WARN log is more reasonable.
> {code:java}
> // @Override
> public void run() {
>   LOG.trace("{}: about to release {}", ShortCircuitCache.this, slot);
>   final DfsClientShm shm = (DfsClientShm)slot.getShm();
>   final DomainSocket shmSock = shm.getPeer().getDomainSocket();
>   final String path = shmSock.getPath();
>   boolean success = false;
>   try (DomainSocket sock = DomainSocket.connect(path);
>DataOutputStream out = new DataOutputStream(
>new BufferedOutputStream(sock.getOutputStream()))) {
> new Sender(out).releaseShortCircuitFds(slot.getSlotId());
> DataInputStream in = new DataInputStream(sock.getInputStream());
> ReleaseShortCircuitAccessResponseProto resp =
> ReleaseShortCircuitAccessResponseProto.parseFrom(
> PBHelperClient.vintPrefixed(in));
> if (resp.getStatus() != Status.SUCCESS) {
>   String error = resp.hasError() ? resp.getError() : "(unknown)";
>   throw new IOException(resp.getStatus().toString() + ": " + error);
> }
> LOG.trace("{}: released {}", this, slot);
> success = true;
>   } catch (IOException e) {
> LOG.error(ShortCircuitCache.this + ": failed to release " +
> "short-circuit shared memory slot " + slot + " by sending " +
> "ReleaseShortCircuitAccessRequestProto to " + path +
> ".  Closing shared memory segment.", e);
>   } finally {
> if (success) {
>   shmManager.freeSlot(slot);
> } else {
>   shm.getEndpointShmManager().shutdown(shm);
> }
>   }
> }
> {code}
>  *exception stack:*
> {code:java}
> 2019-08-05,15:28:03,838 ERROR [ShortCircuitCache_SlotReleaser] 
> org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache: 
> ShortCircuitCache(0x65849546): failed to release short-circuit shared memory 
> slot Slot(slotIdx=62, shm=DfsClientShm(70593ef8b3d84cba3c2f0a1e81377eb1)) by 
> sending ReleaseShortCircuitAccessRequestProto to 
> /home/work/app/hdfs/c3micloudsrv-hdd/datanode/dn_socket.  Closing shared 
> memory segment.
> java.io.IOException: ERROR_INVALID: there is no shared memory segment 
> registered with shmId 70593ef8b3d84cba3c2f0a1e81377eb1
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14662) Document the usage of the new Balancer "asService" parameter

2019-08-08 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14662:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~zhangchen] for contributing the patch and [~xkrogen], [~ayushtkn] for 
helping with the review!

> Document the usage of the new Balancer "asService" parameter
> 
>
> Key: HDFS-14662
> URL: https://issues.apache.org/jira/browse/HDFS-14662
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14662.001.patch, HDFS-14662.002.patch, 
> HDFS-14662.003.patch
>
>
> See HDFS-13783; this jira adds documentation on how to run the balancer as a 
> long-running service.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1913) Fix OzoneBucket and RpcClient APIS for acl

2019-08-08 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1913:

Target Version/s: 0.4.1
Priority: Blocker  (was: Major)

> Fix OzoneBucket and RpcClient APIS for acl
> --
>
> Key: HDDS-1913
> URL: https://issues.apache.org/jira/browse/HDDS-1913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Fix addAcl, removeAcl in OzoneBucket to use the newly added acl APIs 
> addAcl/removeAcl introduced as part of HDDS-1739.
> Remove addBucketAcls, removeBucketAcls from RpcClient. We should use 
> addAcl/removeAcl.
>  
> And also address @xiaoyu's comment on the HDDS-1900 jira: should 
> BucketManagerImpl#setBucketProperty() be fixed as well, as it now requires a 
> different permission (WRITE_ACL instead of WRITE)?
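
Purely as an illustration of the direction (simplified stand-ins, not the real 
OzoneBucket/RpcClient signatures), the bulk bucket-ACL calls would be replaced by 
per-ACL calls to the generic API introduced by HDDS-1739:

{code:java}
// Sketch only: route bucket ACL changes through the generic addAcl/removeAcl
// instead of the older addBucketAcls/removeBucketAcls bulk methods.
import java.util.List;

public class AclMigrationSketch {

  interface OzoneAclLike { }

  interface AclClient {     // stand-in for the HDDS-1739 style interface
    boolean addAcl(String volume, String bucket, OzoneAclLike acl);
    boolean removeAcl(String volume, String bucket, OzoneAclLike acl);
  }

  /** Replaces one bulk addBucketAcls(...) call with per-ACL addAcl(...) calls. */
  public static void addAcls(AclClient client, String volume, String bucket,
      List<OzoneAclLike> acls) {
    for (OzoneAclLike acl : acls) {
      client.addAcl(volume, bucket, acl);
    }
  }
}
{code}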



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1891) Ozone fs shell command should work with default port when port number is not specified

2019-08-08 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-1891:
-
Target Version/s: 0.4.1

> Ozone fs shell command should work with default port when port number is not 
> specified
> --
>
> Key: HDDS-1891
> URL: https://issues.apache.org/jira/browse/HDDS-1891
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:bash|title=Without port number -> Error}
> $ ozone fs -ls o3fs://bucket.volume.localhost/
> -ls: Ozone file system url should be either one of the two forms: 
> o3fs://bucket.volume/key  OR o3fs://bucket.volume.om-host.example.com:5678/key
> ...
> {code}
> {code:bash|title=With port number -> Success}
> $ ozone fs -ls o3fs://bucket.volume.localhost:9862/
> Found 1 items
> -rw-rw-rw-   1 hadoop hadoop   1485 2019-08-01 21:14 
> o3fs://bucket.volume.localhost:9862/README.txt
> {code}
> We expect the first command to attempt port 9862 by default.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14459) ClosedChannelException silently ignored in FsVolumeList.addBlockPool()

2019-08-08 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14459:
---
Fix Version/s: 3.1.3
   3.2.1

> ClosedChannelException silently ignored in FsVolumeList.addBlockPool()
> --
>
> Key: HDFS-14459
> URL: https://issues.apache.org/jira/browse/HDFS-14459
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14459.001.patch, HDFS-14459.002.patch, 
> HDFS-14459.003.patch
>
>
> Following on from HDFS-14333, I encountered another scenario where a volume 
> with some sort of disk-level error can silently fail to have the blockpool 
> added to itself in FsVolumeList.addBlockPool().
> In the logs for a recent issue we see the following pattern:
> {code}
> 2019-04-24 04:21:27,690 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
> volume - /CDH/sdi1/dfs/dn/current, StorageType: DISK
> 2019-04-24 04:21:27,691 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
> new volume: DS-694ae931-8a4e-42d5-b2b3-d946e35c6b47
> ...
> 2019-04-24 04:21:27,703 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-936404344-xxx-1426594942733 on volume 
> /CDH/sdi1/dfs/dn/current...
> ...
>  2019-04-24 04:21:27,722 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time 
> taken to scan block pool BP-936404344-xxx-1426594942733 on 
> /CDH/sdi1/dfs/dn/current: 19ms
> >
> ...
> 2019-04-24 04:21:29,871 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding 
> replicas to map for block pool BP-936404344-xxx-1426594942733 on volume 
> /CDH/sdi1/dfs/dn/current...
> ...
> 2019-04-24 04:21:29,872 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Caught 
> exception while adding replicas from /CDH/sdi1/dfs/dn/current. Will throw 
> later.
> java.io.IOException: block pool BP-936404344-10.7.192.215-1426594942733 is 
> not found
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getBlockPoolSlice(FsVolumeImpl.java:407)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:864)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:191
> {code}
> The notable point is that the 'scanning block pool' step must not have 
> completed properly for this volume, but nothing was logged, and then the 
> slightly confusing error is logged when attempting to add the replicas. That 
> error occurs because the block pool was not added to the volume by the 
> addBlockPool step.
> The relevant part of the code in 'addBlockPool()' from current trunk looks 
> like:
> {code}
> for (final FsVolumeImpl v : volumes) {
>   Thread t = new Thread() {
> public void run() {
>   try (FsVolumeReference ref = v.obtainReference()) {
> FsDatasetImpl.LOG.info("Scanning block pool " + bpid +
> " on volume " + v + "...");
> long startTime = Time.monotonicNow();
> v.addBlockPool(bpid, conf);
> long timeTaken = Time.monotonicNow() - startTime;
> FsDatasetImpl.LOG.info("Time taken to scan block pool " + bpid +
> " on " + v + ": " + timeTaken + "ms");
>   } catch (ClosedChannelException e) {
> // ignore.
>   } catch (IOException ioe) {
> FsDatasetImpl.LOG.info("Caught exception while scanning " + v +
> ". Will throw later.", ioe);
> unhealthyDataDirs.put(v, ioe);
>   }
> }
>   };
>   blockPoolAddingThreads.add(t);
>   t.start();
> }
> {code}
> As we get the first log message (Scanning block pool ... ), but not the 
> second (Time taken to scan block pool ...), and we don't get anything logged 
> or an exception thrown, then the operation must have encountered a 
> ClosedChannelException which is silently ignored.
> I am also not sure if we should ignore a ClosedChannelException, as it means 
> the volume failed to add fully. As ClosedChannelException is a subclass of 
> IOException perhaps we can remove that catch block entirely?
> Finally, HDFS-14333 refactored the above code to allow the DN to better 
> handle a disk failure on DN startup. However, if addBlockPool does throw an 
> exception, it will mean getAllVolumesMap() will not get called and the DN 
> will end up partly initialized.
> DataNode.initBlockPool() calls FsDatasetImpl.addBlockPool() which 
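
Regarding the suggestion above to stop silently ignoring ClosedChannelException: 
since it is a subclass of IOException, simply dropping the dedicated catch block 
makes it flow into the same unhealthy-volume handling. A stand-alone sketch of 
that shape (not the actual FsVolumeList patch; the Volume interface is a stand-in):

{code:java}
// Sketch only: no special-case catch, so a ClosedChannelException is recorded
// as an unhealthy volume like any other IOException instead of being ignored.
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class AddBlockPoolSketch {

  interface Volume {
    void addBlockPool(String bpid) throws IOException;
  }

  private final Map<Volume, IOException> unhealthyVolumes = new ConcurrentHashMap<>();

  public void scanVolume(Volume v, String bpid) {
    try {
      v.addBlockPool(bpid);
    } catch (IOException ioe) {
      // ClosedChannelException lands here too, so the volume is marked
      // unhealthy rather than left half-initialized without its block pool.
      unhealthyVolumes.put(v, ioe);
    }
  }
}
{code}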

[jira] [Updated] (HDFS-14459) ClosedChannelException silently ignored in FsVolumeList.addBlockPool()

2019-08-08 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14459:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to trunk, branch-3.2 and branch-3.1
Thanks [~smeng] for the review and [~sodonnell] for contributing the patch!

> ClosedChannelException silently ignored in FsVolumeList.addBlockPool()
> --
>
> Key: HDFS-14459
> URL: https://issues.apache.org/jira/browse/HDFS-14459
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14459.001.patch, HDFS-14459.002.patch, 
> HDFS-14459.003.patch
>
>
> Following on from HDFS-14333, I encountered another scenario where a volume 
> with some sort of disk-level error can silently fail to have the blockpool 
> added to itself in FsVolumeList.addBlockPool().
> In the logs for a recent issue we see the following pattern:
> {code}
> 2019-04-24 04:21:27,690 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
> volume - /CDH/sdi1/dfs/dn/current, StorageType: DISK
> 2019-04-24 04:21:27,691 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
> new volume: DS-694ae931-8a4e-42d5-b2b3-d946e35c6b47
> ...
> 2019-04-24 04:21:27,703 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-936404344-xxx-1426594942733 on volume 
> /CDH/sdi1/dfs/dn/current...
> ...
>  2019-04-24 04:21:27,722 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time 
> taken to scan block pool BP-936404344-xxx-1426594942733 on 
> /CDH/sdi1/dfs/dn/current: 19ms
> >
> ...
> 2019-04-24 04:21:29,871 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding 
> replicas to map for block pool BP-936404344-xxx-1426594942733 on volume 
> /CDH/sdi1/dfs/dn/current...
> ...
> 2019-04-24 04:21:29,872 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Caught 
> exception while adding replicas from /CDH/sdi1/dfs/dn/current. Will throw 
> later.
> java.io.IOException: block pool BP-936404344-10.7.192.215-1426594942733 is 
> not found
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getBlockPoolSlice(FsVolumeImpl.java:407)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:864)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:191
> {code}
> The notable point is that the 'scanning block pool' step must not have 
> completed properly for this volume, but nothing was logged, and then the 
> slightly confusing error is logged when attempting to add the replicas. That 
> error occurs because the block pool was not added to the volume by the 
> addBlockPool step.
> The relevant part of the code in 'addBlockPool()' from current trunk looks 
> like:
> {code}
> for (final FsVolumeImpl v : volumes) {
>   Thread t = new Thread() {
> public void run() {
>   try (FsVolumeReference ref = v.obtainReference()) {
> FsDatasetImpl.LOG.info("Scanning block pool " + bpid +
> " on volume " + v + "...");
> long startTime = Time.monotonicNow();
> v.addBlockPool(bpid, conf);
> long timeTaken = Time.monotonicNow() - startTime;
> FsDatasetImpl.LOG.info("Time taken to scan block pool " + bpid +
> " on " + v + ": " + timeTaken + "ms");
>   } catch (ClosedChannelException e) {
> // ignore.
>   } catch (IOException ioe) {
> FsDatasetImpl.LOG.info("Caught exception while scanning " + v +
> ". Will throw later.", ioe);
> unhealthyDataDirs.put(v, ioe);
>   }
> }
>   };
>   blockPoolAddingThreads.add(t);
>   t.start();
> }
> {code}
> As we get the first log message (Scanning block pool ... ), but not the 
> second (Time taken to scan block pool ...), and we don't get anything logged 
> or an exception thrown, then the operation must have encountered a 
> ClosedChannelException which is silently ignored.
> I am also not sure if we should ignore a ClosedChannelException, as it means 
> the volume failed to add fully. As ClosedChannelException is a subclass of 
> IOException perhaps we can remove that catch block entirely?
> Finally, HDFS-14333 refactored the above code to allow the DN to better 
> handle a disk failure on DN startup. However, if addBlockPool does throw an 
> exception, it will mean 

[jira] [Work logged] (HDDS-1920) Place ozone.om.address config key default value in ozone-site.xml

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1920?focusedWorklogId=291572=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291572
 ]

ASF GitHub Bot logged work on HDDS-1920:


Author: ASF GitHub Bot
Created on: 08/Aug/19 20:42
Start Date: 08/Aug/19 20:42
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #1237: HDDS-1920. Place 
ozone.om.address config key default value in ozone-site.xml
URL: https://github.com/apache/hadoop/pull/1237#issuecomment-519680039
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291572)
Time Spent: 40m  (was: 0.5h)

> Place ozone.om.address config key default value in ozone-site.xml
> -
>
> Key: HDDS-1920
> URL: https://issues.apache.org/jira/browse/HDDS-1920
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {code:xml}
>   <property>
>     <name>ozone.om.address</name>
> -   <value/>
> +   <value>0.0.0.0:9862</value>
>     <tag>OM, REQUIRED</tag>
>     <description>
>       The address of the Ozone OM service. This allows clients to discover
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14701) Change Log Level to warn in SlotReleaser

2019-08-08 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903313#comment-16903313
 ] 

Wei-Chiu Chuang commented on HDFS-14701:


+1

> Change Log Level to warn in SlotReleaser
> 
>
> Key: HDFS-14701
> URL: https://issues.apache.org/jira/browse/HDFS-14701
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14701.001.patch, HDFS-14701.002.patch
>
>
>  If the corresponding DataNode has been stopped or restarted and the DFSClient 
> closes the shared memory segment, the releaseShortCircuitFds API throws an 
> exception and logs an ERROR message. I think it should not be an ERROR log; a 
> WARN log is more reasonable.
> {code:java}
> // @Override
> public void run() {
>   LOG.trace("{}: about to release {}", ShortCircuitCache.this, slot);
>   final DfsClientShm shm = (DfsClientShm)slot.getShm();
>   final DomainSocket shmSock = shm.getPeer().getDomainSocket();
>   final String path = shmSock.getPath();
>   boolean success = false;
>   try (DomainSocket sock = DomainSocket.connect(path);
>DataOutputStream out = new DataOutputStream(
>new BufferedOutputStream(sock.getOutputStream()))) {
> new Sender(out).releaseShortCircuitFds(slot.getSlotId());
> DataInputStream in = new DataInputStream(sock.getInputStream());
> ReleaseShortCircuitAccessResponseProto resp =
> ReleaseShortCircuitAccessResponseProto.parseFrom(
> PBHelperClient.vintPrefixed(in));
> if (resp.getStatus() != Status.SUCCESS) {
>   String error = resp.hasError() ? resp.getError() : "(unknown)";
>   throw new IOException(resp.getStatus().toString() + ": " + error);
> }
> LOG.trace("{}: released {}", this, slot);
> success = true;
>   } catch (IOException e) {
> LOG.error(ShortCircuitCache.this + ": failed to release " +
> "short-circuit shared memory slot " + slot + " by sending " +
> "ReleaseShortCircuitAccessRequestProto to " + path +
> ".  Closing shared memory segment.", e);
>   } finally {
> if (success) {
>   shmManager.freeSlot(slot);
> } else {
>   shm.getEndpointShmManager().shutdown(shm);
> }
>   }
> }
> {code}
>  *exception stack:*
> {code:java}
> 2019-08-05,15:28:03,838 ERROR [ShortCircuitCache_SlotReleaser] 
> org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache: 
> ShortCircuitCache(0x65849546): failed to release short-circuit shared memory 
> slot Slot(slotIdx=62, shm=DfsClientShm(70593ef8b3d84cba3c2f0a1e81377eb1)) by 
> sending ReleaseShortCircuitAccessRequestProto to 
> /home/work/app/hdfs/c3micloudsrv-hdd/datanode/dn_socket.  Closing shared 
> memory segment.
> java.io.IOException: ERROR_INVALID: there is no shared memory segment 
> registered with shmId 70593ef8b3d84cba3c2f0a1e81377eb1
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1829) On OM reload/restart OmMetrics#numKeys should be updated

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1829?focusedWorklogId=291567=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291567
 ]

ASF GitHub Bot logged work on HDDS-1829:


Author: ASF GitHub Bot
Created on: 08/Aug/19 20:38
Start Date: 08/Aug/19 20:38
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1187: HDDS-1829 On OM 
reload/restart OmMetrics#numKeys should be updated
URL: https://github.com/apache/hadoop/pull/1187
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291567)
Time Spent: 5h 20m  (was: 5h 10m)

> On OM reload/restart OmMetrics#numKeys should be updated
> 
>
> Key: HDDS-1829
> URL: https://issues.apache.org/jira/browse/HDDS-1829
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> When OM is restarted or the state is reloaded, OM Metrics is re-initialized. 
> The saved numKeys value might not be valid as the DB state could have 
> changed. Hence, the numKeys metric must be updated with the correct value on 
> metrics re-initialization.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1829) On OM reload/restart OmMetrics#numKeys should be updated

2019-08-08 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDDS-1829.
-
   Resolution: Fixed
Fix Version/s: 0.5.0

Committed to trunk. Thanks for the contribution [~smeng].

> On OM reload/restart OmMetrics#numKeys should be updated
> 
>
> Key: HDDS-1829
> URL: https://issues.apache.org/jira/browse/HDDS-1829
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> When OM is restarted or the state is reloaded, OM Metrics is re-initialized. 
> The saved numKeys value might not be valid as the DB state could have 
> changed. Hence, the numKeys metric must be updated with the correct value on 
> metrics re-initialization.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1829) On OM reload/restart OmMetrics#numKeys should be updated

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1829?focusedWorklogId=291565=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291565
 ]

ASF GitHub Bot logged work on HDDS-1829:


Author: ASF GitHub Bot
Created on: 08/Aug/19 20:37
Start Date: 08/Aug/19 20:37
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #1187: HDDS-1829 On OM 
reload/restart OmMetrics#numKeys should be updated
URL: https://github.com/apache/hadoop/pull/1187#issuecomment-519678463
 
 
   +1
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291565)
Time Spent: 5h 10m  (was: 5h)

> On OM reload/restart OmMetrics#numKeys should be updated
> 
>
> Key: HDDS-1829
> URL: https://issues.apache.org/jira/browse/HDDS-1829
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> When OM is restarted or the state is reloaded, OM Metrics is re-initialized. 
> The saved numKeys value might not be valid as the DB state could have 
> changed. Hence, the numKeys metric must be updated with the correct value on 
> metrics re-initialization.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1913) Fix OzoneBucket and RpcClient APIS for acl

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1913?focusedWorklogId=291558=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291558
 ]

ASF GitHub Bot logged work on HDDS-1913:


Author: ASF GitHub Bot
Created on: 08/Aug/19 20:08
Start Date: 08/Aug/19 20:08
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1257: 
HDDS-1913. Fix OzoneBucket and RpcClient APIS for acl.
URL: https://github.com/apache/hadoop/pull/1257
 
 
   (cherry picked from commit b60d93b8b500f0ba97c027125401368d22028822)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291558)
Time Spent: 10m
Remaining Estimate: 0h

> Fix OzoneBucket and RpcClient APIS for acl
> --
>
> Key: HDDS-1913
> URL: https://issues.apache.org/jira/browse/HDDS-1913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Fix addAcl, removeAcl in OzoneBucket to use the newly added acl APIs 
> addAcl/removeAcl introduced as part of HDDS-1739.
> Remove addBucketAcls, removeBucketAcls from RpcClient. We should use 
> addAcl/removeAcl.
>  
> And also address @xiaoyu's comment on the HDDS-1900 jira: should 
> BucketManagerImpl#setBucketProperty() be fixed as well, as it now requires a 
> different permission (WRITE_ACL instead of WRITE)?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1913) Fix OzoneBucket and RpcClient APIS for acl

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1913:
-
Labels: pull-request-available  (was: )

> Fix OzoneBucket and RpcClient APIS for acl
> --
>
> Key: HDDS-1913
> URL: https://issues.apache.org/jira/browse/HDDS-1913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Fix addAcl, removeAcl in OzoneBucket to use the newly added acl APIs 
> addAcl/removeAcl introduced as part of HDDS-1739.
> Remove addBucketAcls, removeBucketAcls from RpcClient. We should use 
> addAcl/removeAcl.
>  
> And also address @xiaoyu's comment on the HDDS-1900 jira: should 
> BucketManagerImpl#setBucketProperty() be fixed as well, as it now requires a 
> different permission (WRITE_ACL instead of WRITE)?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=291556=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291556
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 08/Aug/19 20:06
Start Date: 08/Aug/19 20:06
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1202: HDDS-1884. 
Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-519668220
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for branch |
   | +1 | mvninstall | 625 | trunk passed |
   | +1 | compile | 410 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 955 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 180 | trunk passed |
   | 0 | spotbugs | 480 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 699 | trunk passed |
   | -0 | patch | 518 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 593 | the patch passed |
   | +1 | compile | 393 | the patch passed |
   | +1 | cc | 393 | the patch passed |
   | +1 | javac | 393 | the patch passed |
   | +1 | checkstyle | 82 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 745 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | the patch passed |
   | +1 | findbugs | 727 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 361 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2061 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 8374 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/17/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1202 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 2d7d04afcdc6 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3ac0f3a |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/17/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/17/testReport/ |
   | Max. process+thread count | 4446 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/17/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291556)
Time Spent: 7h 20m  (was: 7h 10m)

> Support Bucket ACL operations for OM HA.
> 
>
> Key: HDDS-1884
> URL: 

[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=291546=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291546
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 08/Aug/19 20:00
Start Date: 08/Aug/19 20:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1202: HDDS-1884. 
Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-519666099
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 141 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 797 | trunk passed |
   | +1 | compile | 450 | trunk passed |
   | +1 | checkstyle | 93 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1028 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | trunk passed |
   | 0 | spotbugs | 457 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 668 | trunk passed |
   | -0 | patch | 506 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 592 | the patch passed |
   | +1 | compile | 391 | the patch passed |
   | +1 | cc | 391 | the patch passed |
   | +1 | javac | 391 | the patch passed |
   | +1 | checkstyle | 83 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 740 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | the patch passed |
   | +1 | findbugs | 697 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 359 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2088 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 8715 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/16/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1202 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 2824d3171f0b 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3ac0f3a |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/16/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/16/testReport/ |
   | Max. process+thread count | 5281 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/16/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291546)
Time Spent: 7h 10m  (was: 7h)

> Support Bucket ACL operations for OM HA.
> 

  1   2   3   >