[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #2: HDDS-1737. Add Volume check in KeyManager and File Operations.

2019-10-15 Thread GitBox
cxorm commented on a change in pull request #2: HDDS-1737. Add Volume check in 
KeyManager and File Operations.
URL: https://github.com/apache/hadoop-ozone/pull/2#discussion_r335291642
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/file/TestOMDirectoryCreateRequest.java
 ##
 @@ -152,6 +152,38 @@ public void testValidateAndUpdateCache() throws Exception 
{
 
   }
 
+  @Test
+  public void testValidateAndUpdateCacheWithVolumeNotFound() throws Exception {
+String volumeName = "vol1";
+String bucketName = "bucket1";
+String keyName = RandomStringUtils.randomAlphabetic(5);
+for (int i =0; i< 3; i++) {
+  keyName += "/" + RandomStringUtils.randomAlphabetic(5);
+}
+
+OMRequest omRequest = createDirectoryRequest(volumeName, bucketName,
+keyName);
+OMDirectoryCreateRequest omDirectoryCreateRequest =
+new OMDirectoryCreateRequest(omRequest);
+
+OMRequest modifiedOmRequest =
+omDirectoryCreateRequest.preExecute(ozoneManager);
+
+omDirectoryCreateRequest = new OMDirectoryCreateRequest(modifiedOmRequest);
+
+OMClientResponse omClientResponse =
+omDirectoryCreateRequest.validateAndUpdateCache(ozoneManager, 100L,
+ozoneManagerDoubleBufferHelper);
+
+Assert.assertTrue(omClientResponse.getOMResponse().getStatus()
+== OzoneManagerProtocolProtos.Status.VOLUME_NOT_FOUND);
+
+// Key should not exist in DB
+Assert.assertTrue(omMetadataManager.getKeyTable().get(
 
 Review comment:
   Thanks @jojochuang  for the review, I am going to fix it.





[GitHub] [hadoop-ozone] avijayanhwx commented on issue #34: HDDS-2312. Fix typo in ozone command

2019-10-15 Thread GitBox
avijayanhwx commented on issue #34: HDDS-2312. Fix typo in ozone command
URL: https://github.com/apache/hadoop-ozone/pull/34#issuecomment-542531167
 
 
   LGTM +1





[GitHub] [hadoop-ozone] adoroszlai opened a new pull request #35: HDDS-2313. Duplicate release of lock in OMKeyCommitRequest

2019-10-15 Thread GitBox
adoroszlai opened a new pull request #35: HDDS-2313. Duplicate release of lock 
in OMKeyCommitRequest
URL: https://github.com/apache/hadoop-ozone/pull/35
 
 
   ## What changes were proposed in this pull request?
   
   Fix duplicate release of lock (apparently a merge issue, the original change 
(#24) was fine), which causes acceptance test failures:
   
   ```
   ozone-basic :: Smoketest ozone cluster startup
   ==============================================================================
   Check webui static resources                                          | PASS |
   ------------------------------------------------------------------------------
   Start freon testing                                                   | FAIL |
   255 != 0
   ------------------------------------------------------------------------------
   ozone-basic :: Smoketest ozone cluster startup                        | FAIL |
   2 critical tests, 1 passed, 1 failed
   2 tests total, 1 passed, 1 failed
   ```
   
   https://issues.apache.org/jira/browse/HDDS-2313
   
   ## How was this patch tested?
   
   Ran `ozone` acceptance test.
   
   ```
   $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone
   $ ./test.sh
   ...
   ozone-basic :: Smoketest ozone cluster startup
   ==============================================================================
   Check webui static resources                                          | PASS |
   ------------------------------------------------------------------------------
   Start freon testing                                                   | PASS |
   ------------------------------------------------------------------------------
   ozone-basic :: Smoketest ozone cluster startup                        | PASS |
   2 critical tests, 2 passed, 0 failed
   ```





[jira] [Created] (HDDS-2313) Duplicate release of lock in OMKeyCommitRequest

2019-10-15 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2313:
--

 Summary: Duplicate release of lock in OMKeyCommitRequest
 Key: HDDS-2313
 URL: https://issues.apache.org/jira/browse/HDDS-2313
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Affects Versions: 0.5.0
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


{noformat}
om_1| 2019-10-16 05:33:57,413 [IPC Server handler 19 on 9862] ERROR - Trying to release the lock on /bypdd/mybucket4, which was never acquired.
om_1| 2019-10-16 05:33:57,414 WARN ipc.Server: IPC Server handler 19 on 9862, call Call#4 Retry#8 org.apache.hadoop.ozone.om.protocol.OzoneManagerProtocol.submitRequest from 172.29.0.4:37018
om_1| java.lang.IllegalMonitorStateException: Releasing lock on resource /bypdd/mybucket4 without acquiring lock
om_1|   at org.apache.hadoop.ozone.lock.LockManager.getLockForReleasing(LockManager.java:220)
om_1|   at org.apache.hadoop.ozone.lock.LockManager.release(LockManager.java:168)
om_1|   at org.apache.hadoop.ozone.lock.LockManager.writeUnlock(LockManager.java:148)
om_1|   at org.apache.hadoop.ozone.om.lock.OzoneManagerLock.unlock(OzoneManagerLock.java:364)
om_1|   at org.apache.hadoop.ozone.om.lock.OzoneManagerLock.releaseWriteLock(OzoneManagerLock.java:329)
om_1|   at org.apache.hadoop.ozone.om.request.key.OMKeyCommitRequest.validateAndUpdateCache(OMKeyCommitRequest.java:177)
{noformat}
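
For context, the trace above is the classic double-release pattern: the bucket write lock is released once on the normal path and then released again in a shared cleanup path, and the second release fails because the lock is no longer held. Below is a minimal sketch of the buggy shape and the usual fix (release exactly once, in a finally block); the class and method names are illustrative, not the actual OMKeyCommitRequest code.

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch of the double-release pattern behind the stack trace above;
// names and structure are hypothetical, not the actual OMKeyCommitRequest code.
public class DoubleUnlockSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  // Buggy shape: the lock is released on the success path and again in cleanup,
  // so the second unlock() throws IllegalMonitorStateException.
  public void buggy() {
    lock.writeLock().lock();
    try {
      // ... update the key table ...
      lock.writeLock().unlock();   // first release
    } finally {
      lock.writeLock().unlock();   // second release -> IllegalMonitorStateException
    }
  }

  // Fixed shape: acquire, then release exactly once in finally.
  public void fixed() {
    lock.writeLock().lock();
    try {
      // ... update the key table ...
    } finally {
      lock.writeLock().unlock();   // single release
    }
  }
}
{code}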






[GitHub] [hadoop-ozone] adoroszlai opened a new pull request #34: HDDS-2312. Fix typo in ozone command

2019-10-15 Thread GitBox
adoroszlai opened a new pull request #34: HDDS-2312. Fix typo in ozone command
URL: https://github.com/apache/hadoop-ozone/pull/34
 
 
   ## What changes were proposed in this pull request?
   
   Trivial typo fix.
   
   https://issues.apache.org/jira/browse/HDDS-2312
   
   ## How was this patch tested?
   
   ```
   $ docker-compose exec scm ozone
   ...
   insight   tool to get runtime operation information
   ...
   ```





[jira] [Created] (HDDS-2312) Fix typo in ozone command

2019-10-15 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2312:
--

 Summary: Fix typo in ozone command
 Key: HDDS-2312
 URL: https://issues.apache.org/jira/browse/HDDS-2312
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone CLI
Affects Versions: 0.5.0
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


{noformat:title=ozone}
Usage: ozone [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
...
insight   tool to get runtime opeartion information
...
{noformat}

Should be "operation".






[GitHub] [hadoop-ozone] bharatviswa504 opened a new pull request #33: Hdds 1985

2019-10-15 Thread GitBox
bharatviswa504 opened a new pull request #33: Hdds 1985
URL: https://github.com/apache/hadoop-ozone/pull/33
 
 
   https://issues.apache.org/jira/browse/HDDS-1985
   
   No fix is required for this, as the information is retrieved from the MPU Key table with get(), not through RocksDB table iteration. (get() checks the cache first, and then falls back to the table.)
   

   
   Used this Jira to add an integration test to verify the behavior.
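
   For context, a cache-backed table get() of the kind described above looks roughly like the sketch below. This is a generic illustration; the class and method names are hypothetical and not the actual Ozone table/cache API.

   ```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Generic sketch of a cache-then-table lookup as described above; the names are
// illustrative and not the actual Ozone table/cache API.
public class CachedTable<K, V> {
  private final Map<K, V> cache = new ConcurrentHashMap<>();
  private final Map<K, V> backingStore; // stands in for the RocksDB table

  public CachedTable(Map<K, V> backingStore) {
    this.backingStore = backingStore;
  }

  // get() consults the in-memory cache first and only then the backing table,
  // so entries present in the cache are visible without any table iteration.
  public V get(K key) {
    V cached = cache.get(key);
    return cached != null ? cached : backingStore.get(key);
  }

  public void addCacheEntry(K key, V value) {
    cache.put(key, value);
  }
}
   ```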





[jira] [Created] (HDDS-2311) Fix logic in RetryPolicy in OzoneClientSideTranslatorPB

2019-10-15 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2311:


 Summary: Fix logic in RetryPolicy in OzoneClientSideTranslatorPB
 Key: HDDS-2311
 URL: https://issues.apache.org/jira/browse/HDDS-2311
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


OzoneManagerProtocolClientSideTranslatorPB.java

L251: if (cause instanceof NotLeaderException) {
        NotLeaderException notLeaderException = (NotLeaderException) cause;
        omFailoverProxyProvider.performFailoverIfRequired(
            notLeaderException.getSuggestedLeaderNodeId());
        return getRetryAction(RetryAction.RETRY, retries, failovers);
      }

 

The suggested leader returned from the server is not used during failover, because the cause is actually a RemoteException (the NotLeaderException arrives wrapped in it), so the instanceof check above never matches. As a result, the current code never fails over to the suggested leader and simply exhausts the maximum retries against each OM.
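
A sketch of the kind of unwrapping that would let the suggested leader actually be used, continuing the snippet above (illustrative only, not necessarily the final fix):

{code:java}
// Illustrative sketch: unwrap the RemoteException before the instanceof check,
// so the server's suggested leader can be used for failover.
if (cause instanceof RemoteException) {
  IOException unwrapped =
      ((RemoteException) cause).unwrapRemoteException(NotLeaderException.class);
  if (unwrapped instanceof NotLeaderException) {
    NotLeaderException notLeaderException = (NotLeaderException) unwrapped;
    omFailoverProxyProvider.performFailoverIfRequired(
        notLeaderException.getSuggestedLeaderNodeId());
    return getRetryAction(RetryAction.RETRY, retries, failovers);
  }
}
{code}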

 






[GitHub] [hadoop-ozone] xiaoyuyao merged pull request #32: Revert "HDDS 2181. Ozone Manager should send correct ACL type in ACL requests to Authorizer"

2019-10-15 Thread GitBox
xiaoyuyao merged pull request #32: Revert "HDDS 2181. Ozone Manager should send 
correct ACL type in ACL requests to Authorizer"
URL: https://github.com/apache/hadoop-ozone/pull/32
 
 
   





[GitHub] [hadoop-ozone] xiaoyuyao opened a new pull request #32: Revert "HDDS 2181. Ozone Manager should send correct ACL type in ACL requests to Authorizer"

2019-10-15 Thread GitBox
xiaoyuyao opened a new pull request #32: Revert "HDDS 2181. Ozone Manager 
should send correct ACL type in ACL requests to Authorizer"
URL: https://github.com/apache/hadoop-ozone/pull/32
 
 
   Reverts apache/hadoop-ozone#24





[jira] [Created] (HDDS-2310) Add support to add ozone ranger plugin to Ozone Manager classpath

2019-10-15 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2310:


 Summary: Add support to add ozone ranger plugin to Ozone Manager 
classpath
 Key: HDDS-2310
 URL: https://issues.apache.org/jira/browse/HDDS-2310
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: Ozone Manager
Affects Versions: 0.5.0
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


Currently, there is no way to add the Ozone Ranger plugin to the Ozone Manager classpath.

We should be able to set an environment variable that is respected by the ozone scripts and appended to the Ozone Manager classpath.

 

 






[GitHub] [hadoop-ozone] avijayanhwx opened a new pull request #31: HDDS-2254 : Fix flaky unit test TestContainerStateMachine#testRatisSn…

2019-10-15 Thread GitBox
avijayanhwx opened a new pull request #31: HDDS-2254 : Fix flaky unit test 
TestContainerStateMachine#testRatisSn…
URL: https://github.com/apache/hadoop-ozone/pull/31
 
 
   …apshotRetention.
   
   ## What changes were proposed in this pull request?
   When running the unit test repeatedly on a local machine, it failed intermittently while asserting a null value for the CSM snapshot. That assertion does not hold when the other unit test in the class runs first and creates keys in the cluster/container. Hence, moved to a model where each unit test creates its own cluster.
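
   Roughly, the per-test cluster model looks like the sketch below (JUnit 4, with a hypothetical cluster builder standing in for the real MiniOzoneCluster API):

   ```java
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertNull;

// Sketch of the per-test cluster model described above; Cluster and its builder
// are hypothetical stand-ins, not the real MiniOzoneCluster API.
public class TestSnapshotRetentionSketch {

  private Cluster cluster;

  @Before
  public void setUp() throws Exception {
    // A fresh cluster per test, so state created by one test (e.g. keys)
    // can no longer invalidate assertions made by another test.
    cluster = Cluster.newBuilder().build();
    cluster.waitForClusterToBeReady();
  }

  @After
  public void tearDown() {
    if (cluster != null) {
      cluster.shutdown();
    }
  }

  @Test
  public void testSnapshotIsInitiallyNull() throws Exception {
    assertNull(cluster.getLatestSnapshot());
  }

  /** Hypothetical cluster stand-in so the sketch is self-contained. */
  static class Cluster {
    static Builder newBuilder() { return new Builder(); }
    void waitForClusterToBeReady() { }
    Object getLatestSnapshot() { return null; }
    void shutdown() { }
    static class Builder {
      Cluster build() { return new Cluster(); }
    }
  }
}
   ```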
   
   https://issues.apache.org/jira/browse/HDDS-2254
   
   ## How was this patch tested?
   Ran the unit tests in the IDE and command line.





[GitHub] [hadoop-ozone] bharatviswa504 opened a new pull request #30: HDDS-1988. Fix listParts API.

2019-10-15 Thread GitBox
bharatviswa504 opened a new pull request #30: HDDS-1988. Fix listParts API.
URL: https://github.com/apache/hadoop-ozone/pull/30
 
 
   https://issues.apache.org/jira/browse/HDDS-1988
   
   We don't need any fix for the List Parts API, as the information about all uploaded parts is stored as key-value entries in the MPU table (retrieving it does not require iterating the RocksDB table). Added an integration test for this scenario to verify that it works properly.
   





[GitHub] [hadoop-ozone] adoroszlai commented on issue #7: HDDS-1228. Chunk Scanner Checkpoints

2019-10-15 Thread GitBox
adoroszlai commented on issue #7: HDDS-1228. Chunk Scanner Checkpoints
URL: https://github.com/apache/hadoop-ozone/pull/7#issuecomment-542352309
 
 
   @arp7 thanks for the comments on the original PR.  I've updated this one to 
use `Optional`, and while there, use `Instant` instead of `Long` to make it 
more type-safe.  Unfortunately `snakeyaml` doesn't work well with either of 
those, so I kept a `Long` member in the background for serialization.
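
   For illustration, the shape described above is roughly the following: an `Optional<Instant>` view for callers, backed by a plain `Long` that `snakeyaml` can serialize (the field and class names here are made up, not the actual container data fields):

   ```java
import java.time.Instant;
import java.util.Optional;

// Illustrative sketch of keeping a Long member for YAML serialization while
// exposing Optional<Instant> to callers; names are not the actual Ozone fields.
public class ContainerScanData {

  // snakeyaml-friendly representation: epoch millis, null if never scanned.
  private Long dataScanTimestamp;

  // Getter/setter pair used by snakeyaml.
  public Long getDataScanTimestamp() {
    return dataScanTimestamp;
  }

  public void setDataScanTimestamp(Long dataScanTimestamp) {
    this.dataScanTimestamp = dataScanTimestamp;
  }

  // Type-safe view used by the rest of the code.
  public Optional<Instant> lastDataScanTime() {
    return Optional.ofNullable(dataScanTimestamp).map(Instant::ofEpochMilli);
  }

  public void updateDataScanTime(Instant time) {
    this.dataScanTimestamp = time == null ? null : time.toEpochMilli();
  }
}
   ```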





[jira] [Resolved] (HDFS-14890) Setting permissions on name directory fails on non posix compliant filesystems

2019-10-15 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle resolved HDFS-14890.

Resolution: Fixed

[~mohansella] Could you please file a Jira with the exception/log for permission setting on a Windows environment. I am marking this Jira as resolved since the NN failure is not an issue with the patch.

> Setting permissions on name directory fails on non posix compliant filesystems
> --
>
> Key: HDFS-14890
> URL: https://issues.apache.org/jira/browse/HDFS-14890
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.1
> Environment: Windows 10.
>Reporter: hirik
>Assignee: Siddharth Wagle
>Priority: Blocker
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14890.01.patch
>
>
> Hi,
> HDFS NameNode and JournalNode are not starting in Windows machine. Found 
> below related exception in logs. 
> Caused by: java.lang.UnsupportedOperationException
> at java.base/java.nio.file.Files.setPosixFilePermissions(Files.java:2155)
> at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:452)
> at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:591)
> at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:613)
> at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:188)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1206)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:422)
> at com.slog.dfs.hdfs.nn.NameNodeServiceImpl.delayedStart(NameNodeServiceImpl.java:147)
>  
> Code changes related to this issue: 
> [https://github.com/apache/hadoop/commit/07e3cf952eac9e47e7bd5e195b0f9fc28c468313#diff-1a56e69d50f21b059637cfcbf1d23f11]
>  
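
For reference, the usual way to guard this call on non-POSIX filesystems looks roughly like the sketch below. This is an illustrative example only, not the committed HDFS-14890 patch.

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.util.Set;

// Sketch of guarding the POSIX permission call on non-POSIX filesystems
// (e.g. NTFS on Windows); illustrative only, not the actual HDFS-14890 patch.
public final class PermissionUtil {

  private PermissionUtil() { }

  public static void setPermission(File dir, Set<PosixFilePermission> perms)
      throws IOException {
    Path path = dir.toPath();
    try {
      Files.setPosixFilePermissions(path, perms);
    } catch (UnsupportedOperationException e) {
      // Non-POSIX filesystem: fall back to the coarse java.io.File flags.
      dir.setReadable(true, true);
      dir.setWritable(true, true);
      dir.setExecutable(true, true);
    }
  }
}
{code}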






[jira] [Resolved] (HDDS-2295) Display log of freon on the standard output

2019-10-15 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-2295.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

Thanks [~elek] for the contribution and all for the reviews. I've merged the 
changes.

> Display log of freon on the standard output
> ---
>
> Key: HDDS-2295
> URL: https://issues.apache.org/jira/browse/HDDS-2295
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> HDDS-2042 disabled the console logging for all of the ozone command line 
> tools including freon.
> But freon is different; it has a different error handling model. For freon we need all the logs on the console.
>  1. To follow all the different errors
>  2. To get information about the used (random) prefix which can be reused 
> during the validation phase.
>  
> I propose to restore the original behavior for Ozone.






[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #14: HDDS-2295. Display log of freon on the standard output

2019-10-15 Thread GitBox
xiaoyuyao commented on a change in pull request #14: HDDS-2295. Display log of 
freon on the standard output
URL: https://github.com/apache/hadoop-ozone/pull/14#discussion_r335114183
 
 

 ##
 File path: hadoop-ozone/common/src/main/bin/ozone
 ##
 @@ -124,7 +124,7 @@ function ozonecmd_case
 ;;
 freon)
   HADOOP_CLASSNAME=org.apache.hadoop.ozone.freon.Freon
-  OZONE_FREON_OPTS="${OZONE_FREON_OPTS} -Dhadoop.log.file=ozone-freon.log -Dlog4j.configuration=file:${ozone_shell_log4j}"
+  OZONE_FREON_OPTS="${OZONE_FREON_OPTS}"
 
 Review comment:
   Make sense to me. +1.





[GitHub] [hadoop-ozone] xiaoyuyao merged pull request #14: HDDS-2295. Display log of freon on the standard output

2019-10-15 Thread GitBox
xiaoyuyao merged pull request #14: HDDS-2295. Display log of freon on the 
standard output
URL: https://github.com/apache/hadoop-ozone/pull/14
 
 
   





[GitHub] [hadoop-ozone] jojochuang commented on a change in pull request #2: HDDS-1737. Add Volume check in KeyManager and File Operations.

2019-10-15 Thread GitBox
jojochuang commented on a change in pull request #2: HDDS-1737. Add Volume 
check in KeyManager and File Operations.
URL: https://github.com/apache/hadoop-ozone/pull/2#discussion_r335109853
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/file/TestOMDirectoryCreateRequest.java
 ##
 @@ -152,6 +152,38 @@ public void testValidateAndUpdateCache() throws Exception 
{
 
   }
 
+  @Test
+  public void testValidateAndUpdateCacheWithVolumeNotFound() throws Exception {
+String volumeName = "vol1";
+String bucketName = "bucket1";
+String keyName = RandomStringUtils.randomAlphabetic(5);
+for (int i =0; i< 3; i++) {
+  keyName += "/" + RandomStringUtils.randomAlphabetic(5);
+}
+
+OMRequest omRequest = createDirectoryRequest(volumeName, bucketName,
+keyName);
+OMDirectoryCreateRequest omDirectoryCreateRequest =
+new OMDirectoryCreateRequest(omRequest);
+
+OMRequest modifiedOmRequest =
+omDirectoryCreateRequest.preExecute(ozoneManager);
+
+omDirectoryCreateRequest = new OMDirectoryCreateRequest(modifiedOmRequest);
+
+OMClientResponse omClientResponse =
+omDirectoryCreateRequest.validateAndUpdateCache(ozoneManager, 100L,
+ozoneManagerDoubleBufferHelper);
+
+Assert.assertTrue(omClientResponse.getOMResponse().getStatus()
 
 Review comment:
   use Assert.assertEquals() instead





[GitHub] [hadoop-ozone] jojochuang commented on a change in pull request #2: HDDS-1737. Add Volume check in KeyManager and File Operations.

2019-10-15 Thread GitBox
jojochuang commented on a change in pull request #2: HDDS-1737. Add Volume 
check in KeyManager and File Operations.
URL: https://github.com/apache/hadoop-ozone/pull/2#discussion_r335110340
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/file/TestOMDirectoryCreateRequest.java
 ##
 @@ -152,6 +152,38 @@ public void testValidateAndUpdateCache() throws Exception 
{
 
   }
 
+  @Test
+  public void testValidateAndUpdateCacheWithVolumeNotFound() throws Exception {
+String volumeName = "vol1";
+String bucketName = "bucket1";
+String keyName = RandomStringUtils.randomAlphabetic(5);
+for (int i =0; i< 3; i++) {
+  keyName += "/" + RandomStringUtils.randomAlphabetic(5);
+}
+
+OMRequest omRequest = createDirectoryRequest(volumeName, bucketName,
+keyName);
+OMDirectoryCreateRequest omDirectoryCreateRequest =
+new OMDirectoryCreateRequest(omRequest);
+
+OMRequest modifiedOmRequest =
+omDirectoryCreateRequest.preExecute(ozoneManager);
+
+omDirectoryCreateRequest = new OMDirectoryCreateRequest(modifiedOmRequest);
+
+OMClientResponse omClientResponse =
+omDirectoryCreateRequest.validateAndUpdateCache(ozoneManager, 100L,
+ozoneManagerDoubleBufferHelper);
+
+Assert.assertTrue(omClientResponse.getOMResponse().getStatus()
+== OzoneManagerProtocolProtos.Status.VOLUME_NOT_FOUND);
+
+// Key should not exist in DB
+Assert.assertTrue(omMetadataManager.getKeyTable().get(
 
 Review comment:
   use assertNull()
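
   Taken together with the earlier suggestion to use Assert.assertEquals() for the status check, the suggested assertions would look roughly like this (a sketch of the suggested form; `dirKey` is a placeholder for whatever key the truncated lookup above passes to get()):

   ```java
// Sketch of the reviewers' suggestions; variable names follow the test above,
// and dirKey is a placeholder for the key used in the original assertion.
Assert.assertEquals(OzoneManagerProtocolProtos.Status.VOLUME_NOT_FOUND,
    omClientResponse.getOMResponse().getStatus());

// Key should not exist in DB
Assert.assertNull(omMetadataManager.getKeyTable().get(dirKey));
   ```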





[GitHub] [hadoop-ozone] xiaoyuyao merged pull request #24: HDDS 2181. Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-10-15 Thread GitBox
xiaoyuyao merged pull request #24: HDDS 2181. Ozone Manager should send correct 
ACL type in ACL requests to Authorizer
URL: https://github.com/apache/hadoop-ozone/pull/24
 
 
   





[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #24: HDDS 2181. Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-10-15 Thread GitBox
xiaoyuyao commented on a change in pull request #24: HDDS 2181. Ozone Manager 
should send correct ACL type in ACL requests to Authorizer
URL: https://github.com/apache/hadoop-ozone/pull/24#discussion_r335101564
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/security/acl/OzoneNativeAuthorizer.java
 ##
 @@ -77,25 +80,52 @@ public boolean checkAccess(IOzoneObj ozObject, 
RequestContext context)
   "configured to work with OzoneObjInfo type only.", INVALID_REQUEST);
 }
 
+// For CREATE and DELETE acl requests, the parents need to be checked
+// for WRITE acl. If Key create request is received, then we need to
+// check if user has WRITE acl set on Bucket and Volume. In all other cases
+// the parents also need to be checked for the same acl type.
+if (isACLTypeCreate || isACLTypeDelete) {
+  parentContext = RequestContext.newBuilder()
+.setClientUgi(context.getClientUgi())
+.setIp(context.getIp())
+.setAclType(context.getAclType())
+.setAclRights(ACLType.WRITE)
+.build();
+} else {
+  parentContext = context;
+}
+
 switch (objInfo.getResourceType()) {
 case VOLUME:
   LOG.trace("Checking access for volume: {}", objInfo);
   return volumeManager.checkAccess(objInfo, context);
 case BUCKET:
   LOG.trace("Checking access for bucket: {}", objInfo);
-  return (bucketManager.checkAccess(objInfo, context)
-  && volumeManager.checkAccess(objInfo, context));
+  // Skip bucket access check for CREATE acl since
+  // bucket will not exist at the time of creation
+  boolean bucketAccess = isACLTypeCreate
+  || bucketManager.checkAccess(objInfo, context);
+  return (bucketAccess
+  && volumeManager.checkAccess(objInfo, parentContext));
 case KEY:
   LOG.trace("Checking access for Key: {}", objInfo);
-  return (keyManager.checkAccess(objInfo, context)
-  && prefixManager.checkAccess(objInfo, context)
-  && bucketManager.checkAccess(objInfo, context)
-  && volumeManager.checkAccess(objInfo, context));
+  // Skip key access check for CREATE acl since
+  // key will not exist at the time of creation
+  boolean keyAccess = isACLTypeCreate
 
 Review comment:
   This can be done as a refactor later.





[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #24: HDDS 2181. Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-10-15 Thread GitBox
xiaoyuyao commented on a change in pull request #24: HDDS 2181. Ozone Manager 
should send correct ACL type in ACL requests to Authorizer
URL: https://github.com/apache/hadoop-ozone/pull/24#discussion_r335097267
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/security/acl/OzoneNativeAuthorizer.java
 ##
 @@ -77,25 +80,52 @@ public boolean checkAccess(IOzoneObj ozObject, 
RequestContext context)
   "configured to work with OzoneObjInfo type only.", INVALID_REQUEST);
 }
 
+// For CREATE and DELETE acl requests, the parents need to be checked
+// for WRITE acl. If Key create request is received, then we need to
+// check if user has WRITE acl set on Bucket and Volume. In all other cases
+// the parents also need to be checked for the same acl type.
+if (isACLTypeCreate || isACLTypeDelete) {
+  parentContext = RequestContext.newBuilder()
+.setClientUgi(context.getClientUgi())
+.setIp(context.getIp())
+.setAclType(context.getAclType())
+.setAclRights(ACLType.WRITE)
+.build();
+} else {
+  parentContext = context;
+}
+
 switch (objInfo.getResourceType()) {
 case VOLUME:
   LOG.trace("Checking access for volume: {}", objInfo);
   return volumeManager.checkAccess(objInfo, context);
 case BUCKET:
   LOG.trace("Checking access for bucket: {}", objInfo);
-  return (bucketManager.checkAccess(objInfo, context)
-  && volumeManager.checkAccess(objInfo, context));
+  // Skip bucket access check for CREATE acl since
+  // bucket will not exist at the time of creation
+  boolean bucketAccess = isACLTypeCreate
+  || bucketManager.checkAccess(objInfo, context);
+  return (bucketAccess
+  && volumeManager.checkAccess(objInfo, parentContext));
 case KEY:
   LOG.trace("Checking access for Key: {}", objInfo);
-  return (keyManager.checkAccess(objInfo, context)
-  && prefixManager.checkAccess(objInfo, context)
-  && bucketManager.checkAccess(objInfo, context)
-  && volumeManager.checkAccess(objInfo, context));
+  // Skip key access check for CREATE acl since
+  // key will not exist at the time of creation
+  boolean keyAccess = isACLTypeCreate
 
 Review comment:
   NIT: I think we can move the ACLType.CREATE special handling logic inside 
keyManager.checkAccess() where we have another special handling for 
ACLType.WRITE. This way, the implementation inside OzoneNativeAuthorizer will 
be much cleaner. 
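
   A rough sketch of that suggestion (hypothetical shape only, not the actual KeyManagerImpl code): handle ACLType.CREATE next to the existing ACLType.WRITE special case inside keyManager.checkAccess(), so OzoneNativeAuthorizer no longer needs its own branch:

   ```java
// Illustrative sketch of the suggested refactor; checkKeyAcls() is a
// hypothetical helper, not an existing method.
public boolean checkAccess(OzoneObj ozObject, RequestContext context)
    throws IOException {
  if (context.getAclRights() == ACLType.CREATE) {
    // The key cannot exist yet, so skip the key-level ACL lookup and let the
    // bucket/volume checks decide.
    return true;
  }
  if (context.getAclRights() == ACLType.WRITE) {
    // ... existing special handling for WRITE ...
  }
  // ... normal key ACL evaluation ...
  return checkKeyAcls(ozObject, context);
}
   ```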





[GitHub] [hadoop-ozone] anuengineer merged pull request #20: HDDS-2196 Add CLI Commands and Protobuf messages to trigger decom states

2019-10-15 Thread GitBox
anuengineer merged pull request #20: HDDS-2196 Add CLI Commands and Protobuf 
messages to trigger decom states 
URL: https://github.com/apache/hadoop-ozone/pull/20
 
 
   





[jira] [Created] (HDFS-14909) DFSNetworkTopology#chooseRandomWithStorageType() should not decrease storage count for excluded node which is already part of excluded scope

2019-10-15 Thread Surendra Singh Lilhore (Jira)
Surendra Singh Lilhore created HDFS-14909:
-

 Summary: DFSNetworkTopology#chooseRandomWithStorageType() should 
not decrease storage count for excluded node which is already part of excluded 
scope 
 Key: HDFS-14909
 URL: https://issues.apache.org/jira/browse/HDFS-14909
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.1.1
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore









[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #20: HDDS-2196 Add CLI Commands and Protobuf messages to trigger decom states

2019-10-15 Thread GitBox
sodonnel commented on a change in pull request #20: HDDS-2196 Add CLI Commands 
and Protobuf messages to trigger decom states 
URL: https://github.com/apache/hadoop-ozone/pull/20#discussion_r334997811
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeDecommissionManager.java
 ##
 @@ -0,0 +1,286 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState;
+import org.apache.hadoop.hdds.scm.container.ContainerManager;
+import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineManager;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetAddress;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.UnknownHostException;
+import java.util.LinkedList;
+import java.util.List;
+
+/**
+ * Class used to manage datanodes scheduled for maintenance or decommission.
+ */
+public class NodeDecommissionManager {
+
+  private NodeManager nodeManager;
+  private PipelineManager pipeLineManager;
+  private ContainerManager containerManager;
+  private OzoneConfiguration conf;
+  private boolean useHostnames;
+
+  private List<DatanodeDetails> pendingNodes = new LinkedList<>();
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(DatanodeAdminManager.class);
+
+
+  static class HostDefinition {
+private String rawHostname;
+private String hostname;
+private int port;
+
+HostDefinition(String hostname) throws InvalidHostStringException {
+  this.rawHostname = hostname;
+  parseHostname();
+}
+
+public String getRawHostname() {
+  return rawHostname;
+}
+
+public String getHostname() {
+  return hostname;
+}
+
+public int getPort() {
+  return port;
+}
+
+private void parseHostname() throws InvalidHostStringException{
+  try {
+// A URI *must* have a scheme, so just create a fake one
+URI uri = new URI("my://"+rawHostname.trim());
+this.hostname = uri.getHost();
+this.port = uri.getPort();
+
+if (this.hostname == null) {
+  throw new InvalidHostStringException("The string "+rawHostname+
+  " does not contain a value hostname or hostname:port 
definition");
+}
+  } catch (URISyntaxException e) {
+throw new InvalidHostStringException(
+"Unable to parse the hoststring "+rawHostname, e);
+  }
+}
+  }
+
+  private List<DatanodeDetails> mapHostnamesToDatanodes(List<String> hosts)
+  throws InvalidHostStringException {
+List<DatanodeDetails> results = new LinkedList<>();
+for (String hostString : hosts) {
+  HostDefinition host = new HostDefinition(hostString);
+  InetAddress addr;
+  try {
+addr = InetAddress.getByName(host.getHostname());
+  } catch (UnknownHostException e) {
+throw new InvalidHostStringException("Unable to resolve the host "
++host.getRawHostname(), e);
+  }
+  String dnsName;
+  if (useHostnames) {
+dnsName = addr.getHostName();
+  } else {
+dnsName = addr.getHostAddress();
+  }
+  List<DatanodeDetails> found = nodeManager.getNodesByAddress(dnsName);
+  if (found.size() == 0) {
+throw new InvalidHostStringException("The string " +
+host.getRawHostname()+" resolved to "+dnsName +
+" is not found in SCM");
+  } else if (found.size() == 1) {
+if (host.getPort() != -1 &&
+!validateDNPortMatch(host.getPort(), found.get(0))) {
+  throw new InvalidHostStringException("The string "+
+  host.getRawHostname()+" matched a single datanode, but the "+
+  "given port is not used by that Datanode");
+}
+results.add(found.get(0));
+  } else if (found.size() > 1) {
+DatanodeDetails match = null;
+for(DatanodeDetails dn : found) {

[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #20: HDDS-2196 Add CLI Commands and Protobuf messages to trigger decom states

2019-10-15 Thread GitBox
sodonnel commented on a change in pull request #20: HDDS-2196 Add CLI Commands 
and Protobuf messages to trigger decom states 
URL: https://github.com/apache/hadoop-ozone/pull/20#discussion_r334997378
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeDecommissionManager.java
 ##

[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #20: HDDS-2196 Add CLI Commands and Protobuf messages to trigger decom states

2019-10-15 Thread GitBox
sodonnel commented on a change in pull request #20: HDDS-2196 Add CLI Commands 
and Protobuf messages to trigger decom states 
URL: https://github.com/apache/hadoop-ozone/pull/20#discussion_r334984970
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeDecommissionManager.java
 ##

[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #20: HDDS-2196 Add CLI Commands and Protobuf messages to trigger decom states

2019-10-15 Thread GitBox
sodonnel commented on a change in pull request #20: HDDS-2196 Add CLI Commands 
and Protobuf messages to trigger decom states 
URL: https://github.com/apache/hadoop-ozone/pull/20#discussion_r334980075
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeDecommissionManager.java
 ##
 @@ -0,0 +1,286 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState;
+import org.apache.hadoop.hdds.scm.container.ContainerManager;
+import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineManager;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetAddress;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.UnknownHostException;
+import java.util.LinkedList;
+import java.util.List;
+
+/**
+ * Class used to manage datanodes scheduled for maintenance or decommission.
+ */
+public class NodeDecommissionManager {
+
+  private NodeManager nodeManager;
+  private PipelineManager pipeLineManager;
+  private ContainerManager containerManager;
+  private OzoneConfiguration conf;
+  private boolean useHostnames;
+
+  private List<DatanodeDetails> pendingNodes = new LinkedList<>();
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(DatanodeAdminManager.class);
+
+
+  static class HostDefinition {
+private String rawHostname;
+private String hostname;
+private int port;
+
+HostDefinition(String hostname) throws InvalidHostStringException {
+  this.rawHostname = hostname;
+  parseHostname();
+}
+
+public String getRawHostname() {
+  return rawHostname;
+}
+
+public String getHostname() {
+  return hostname;
+}
+
+public int getPort() {
+  return port;
+}
+
+private void parseHostname() throws InvalidHostStringException{
+  try {
+// A URI *must* have a scheme, so just create a fake one
+URI uri = new URI("my://"+rawHostname.trim());
 
 Review comment:
   I changed it to empty. I only used the URI class as it makes it easier to 
parse the host / IP and port which can be in many different formats. The 
alternative is a regex, which could get messy with different IP address formats 
etc.
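
   In isolation, the URI trick described here looks like this (a small self-contained example with made-up inputs, not the PR code):

   ```java
import java.net.URI;
import java.net.URISyntaxException;

// Standalone illustration of the fake-scheme URI parsing discussed above.
public class HostPortParser {
  public static void main(String[] args) throws URISyntaxException {
    String[] inputs = {"dn1.example.com", "dn2.example.com:9858", "10.0.0.7:9858"};
    for (String raw : inputs) {
      // A URI must have a scheme, so prepend a fake one purely for parsing.
      URI uri = new URI("my://" + raw.trim());
      // getPort() returns -1 when no port was given.
      System.out.println(uri.getHost() + " -> port " + uri.getPort());
    }
  }
}
   ```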





[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #20: HDDS-2196 Add CLI Commands and Protobuf messages to trigger decom states

2019-10-15 Thread GitBox
sodonnel commented on a change in pull request #20: HDDS-2196 Add CLI Commands 
and Protobuf messages to trigger decom states 
URL: https://github.com/apache/hadoop-ozone/pull/20#discussion_r334978915
 
 

 ##
 File path: 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/node/DatanodeAdminMaintenanceSubCommand.java
 ##
 @@ -0,0 +1,63 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.cli.node;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.scm.client.ScmClient;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import picocli.CommandLine;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.ParentCommand;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.Callable;
+
+/**
+ * Place one or more datanodes into Maintenance Mode.
+ */
+@Command(
+name = "maintenance",
+description = "Put a datanode into Maintenance Mode",
+mixinStandardHelpOptions = true,
+versionProvider = HddsVersionProvider.class)
+public class DatanodeAdminMaintenanceSubCommand implements Callable<Void> {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(DatanodeAdminMaintenanceSubCommand.class);
+
+  @CommandLine.Parameters(description = "List of fully qualified host names")
+  private List<String> hosts = new ArrayList<>();
+
+  @CommandLine.Option(names = {"--end"},
+  description = "Automatically end maintenance after the given hours. "+
+  "By default, maintenance must be ended manually.")
+  private int endInHours = 0;
 
 Review comment:
   Right now the value is not used anywhere. I think I intended zero to mean "stay in maintenance forever", but in the Javadoc for SCMClient.startMaintenanceNodes() I stated that null would indicate the node stays in maintenance forever. I have changed the javadoc to reflect that passing zero means "forever".





[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #20: HDDS-2196 Add CLI Commands and Protobuf messages to trigger decom states

2019-10-15 Thread GitBox
sodonnel commented on a change in pull request #20: HDDS-2196 Add CLI Commands 
and Protobuf messages to trigger decom states 
URL: https://github.com/apache/hadoop-ozone/pull/20#discussion_r334974319
 
 

 ##
 File path: 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/node/DatanodeAdminCommands.java
 ##
 @@ -0,0 +1,55 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.cli.node;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.cli.MissingSubcommandException;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.ParentCommand;
+import org.apache.hadoop.hdds.scm.cli.SCMCLI;
+
+import java.util.concurrent.Callable;
+
+/**
+ * Subcommand to group datanode admin related operations.
+ */
+@Command(
+name = "dnadmin",
+description = "Datanode Administration specific operations",
 
 Review comment:
   I fixed this one.





[jira] [Created] (HDFS-14908) LeaseManager should check parent-child relationship when filter open files.

2019-10-15 Thread Jinglun (Jira)
Jinglun created HDFS-14908:
--

 Summary: LeaseManager should check parent-child relationship when 
filter open files.
 Key: HDFS-14908
 URL: https://issues.apache.org/jira/browse/HDFS-14908
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jinglun
Assignee: Jinglun


Now when doing listOpenFiles(), LeaseManager only checks whether the filter 
path is a string prefix of the open file paths. Instead we should check whether 
the filter path is a parent/ancestor directory of the open files.
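
As a minimal illustration of the difference (not the LeaseManager code itself): 
a plain prefix check matches /a/bc/file for the filter /a/b, while an ancestor 
check only matches real children such as /a/b/c/file.

{code:java}
// Illustrative only, not the LeaseManager implementation.
import org.apache.hadoop.fs.Path;

public class AncestorCheckExample {

  static boolean prefixMatch(String filter, String openFile) {
    return openFile.startsWith(filter);           // also matches "/a/bc/file"
  }

  static boolean ancestorMatch(String filter, String openFile) {
    Path target = new Path(filter);
    for (Path p = new Path(openFile).getParent(); p != null; p = p.getParent()) {
      if (p.equals(target)) {
        return true;                              // filter is a real ancestor
      }
    }
    return false;
  }

  public static void main(String[] args) {
    System.out.println(prefixMatch("/a/b", "/a/bc/file"));    // true  (wrong)
    System.out.println(ancestorMatch("/a/b", "/a/bc/file"));  // false (correct)
    System.out.println(ancestorMatch("/a/b", "/a/b/c/file")); // true
  }
}
{code}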



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-10-15 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.util.TestReadWriteDiskValidator 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [160K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [328K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/475/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt
  [12K]
   
htt

[jira] [Resolved] (HDDS-2305) Update Ozone to latest ratis snapshot(0.5.0-3f446aa-SNAPSHOT)

2019-10-15 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar resolved HDDS-2305.
---
Fix Version/s: 0.5.0
   Resolution: Fixed

> Update Ozone to latest ratis snapshot(0.5.0-3f446aa-SNAPSHOT)
> -
>
> Key: HDDS-2305
> URL: https://issues.apache.org/jira/browse/HDDS-2305
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This jira updates Ozone to the latest Ratis snapshot, corresponding to the 
> following commit:
> {code}
> commit 3f446aaf27704b0bf929bd39887637a6a71b4418 (HEAD -> master, 
> origin/master, origin/HEAD)
> Author: Tsz Wo Nicholas Sze 
> Date:   Fri Oct 11 16:35:38 2019 +0800
> RATIS-705. GrpcClientProtocolClient#close Interrupts itself.  Contributed 
> by Lokesh Jain
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] nandakumar131 merged pull request #26: HDDS-2305. Update Ozone to latest ratis snapshot(0.5.0-3f446aa-SNAPSHOT). Contributed by Mukul Kumar Singh.

2019-10-15 Thread GitBox
nandakumar131 merged pull request #26: HDDS-2305. Update Ozone to latest ratis 
snapshot(0.5.0-3f446aa-SNAPSHOT). Contributed by  Mukul Kumar Singh.
URL: https://github.com/apache/hadoop-ozone/pull/26
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2194) Replication of Container fails with "Only closed containers could be exported"

2019-10-15 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar resolved HDDS-2194.
---
Fix Version/s: 0.5.0
   Resolution: Fixed

> Replication of Container fails with "Only closed containers could be exported"
> --
>
> Key: HDDS-2194
> URL: https://issues.apache.org/jira/browse/HDDS-2194
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Mukul Kumar Singh
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Replication of Container fails with "Only closed containers could be exported"
> cc: [~nanda]
> {code}
> 2019-09-26 15:00:17,640 [grpc-default-executor-13] INFO  replication.GrpcReplicationService (GrpcReplicationService.java:download(57)) - Streaming container data (37) to other datanode
> Sep 26, 2019 3:00:17 PM org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor run
> SEVERE: Exception while executing runnable org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed@70e641f2
> java.lang.IllegalStateException: Only closed containers could be exported: ContainerId=37
> at org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.exportContainerData(KeyValueContainer.java:527)
> at org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.exportContainer(KeyValueHandler.java:875)
> at org.apache.hadoop.ozone.container.ozoneimpl.ContainerController.exportContainer(ContainerController.java:134)
> at org.apache.hadoop.ozone.container.replication.OnDemandContainerReplicationSource.copyData(OnDemandContainerReplicationSource.java:64)
> at org.apache.hadoop.ozone.container.replication.GrpcReplicationService.download(GrpcReplicationService.java:63)
> at org.apache.hadoop.hdds.protocol.datanode.proto.IntraDatanodeProtocolServiceGrpc$MethodHandlers.invoke(IntraDatanodeProtocolServiceGrpc.java:217)
> at org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:171)
> at org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:283)
> at org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:710)
> at org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
> at org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 2019-09-26 15:00:17,644 [grpc-default-executor-17] ERROR replication.GrpcReplicationClient (GrpcReplicationClient.java:onError(142)) - Container download was unsuccessfull
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNKNOWN
> at org.apache.ratis.thirdparty.io.grpc.Status.asRuntimeException(Status.java:526)
> at org.apache.ratis.thirdparty.io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:434)
> at org.apache.ratis.thirdparty.io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
> at org.apache.ratis.thirdparty.io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
> at org.apache.ratis.thirdparty.io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
> at org.apache.ratis.thirdparty.io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:678)
> at org.apache.ratis.thirdparty.io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
> at org.apache.ratis.thirdparty.io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
> at org.apache.ratis.thirdparty.io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
> at org.apache.ratis.thirdparty.io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:397)
> at org.apache.ratis.thirdparty.io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:459)
> at org.apache.ratis.thirdpa

[GitHub] [hadoop-ozone] nandakumar131 commented on issue #25: HDDS-2194. Replication of Container fails with "Only closed containers could be exported"

2019-10-15 Thread GitBox
nandakumar131 commented on issue #25: HDDS-2194. Replication of Container fails 
with "Only closed containers could be exported"
URL: https://github.com/apache/hadoop-ozone/pull/25#issuecomment-542193314
 
 
   Thanks @bharatviswa504 for the contribution and @adoroszlai for review.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] nandakumar131 merged pull request #25: HDDS-2194. Replication of Container fails with "Only closed containers could be exported"

2019-10-15 Thread GitBox
nandakumar131 merged pull request #25: HDDS-2194. Replication of Container 
fails with "Only closed containers could be exported"
URL: https://github.com/apache/hadoop-ozone/pull/25
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] nandakumar131 commented on issue #25: HDDS-2194. Replication of Container fails with "Only closed containers could be exported"

2019-10-15 Thread GitBox
nandakumar131 commented on issue #25: HDDS-2194. Replication of Container fails 
with "Only closed containers could be exported"
URL: https://github.com/apache/hadoop-ozone/pull/25#issuecomment-542192553
 
 
   Test failures are not related.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2299) BlockManager should allocate a block in excluded pipelines if none other left

2019-10-15 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar resolved HDDS-2299.
---
Fix Version/s: 0.5.0
   Resolution: Fixed

> BlockManager should allocate a block in excluded pipelines if none other left
> -
>
> Key: HDDS-2299
> URL: https://issues.apache.org/jira/browse/HDDS-2299
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In SCM, BlockManager#allocateBlock does not allocate a block in the pipelines 
> or datanodes the client asked to exclude. But there can be cases where the 
> excluded pipelines and datanodes are the only ones left. In such a case SCM 
> should still allocate a block in one of those pipelines and return it to the 
> client. The client can then choose to use or discard the block.
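
A minimal sketch of the fallback idea (hypothetical generic types, not the 
actual BlockManagerImpl/PipelineManager code):

{code:java}
import java.util.List;
import java.util.Optional;

final class FallbackChooser {

  // Prefer a candidate outside the exclude list, but fall back to an excluded
  // one if nothing else is left, mirroring the behaviour described above.
  static <P> Optional<P> choose(List<P> candidates, List<P> excluded) {
    Optional<P> preferred = candidates.stream()
        .filter(p -> !excluded.contains(p))
        .findFirst();
    return preferred.isPresent()
        ? preferred
        : candidates.stream().findFirst();
  }
}
{code}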



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] nandakumar131 merged pull request #19: HDDS-2299. BlockManager should allocate a block in excluded pipelines if none other left

2019-10-15 Thread GitBox
nandakumar131 merged pull request #19: HDDS-2299. BlockManager should allocate 
a block in excluded pipelines if none other left
URL: https://github.com/apache/hadoop-ozone/pull/19
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] nandakumar131 commented on issue #19: HDDS-2299. BlockManager should allocate a block in excluded pipelines if none other left

2019-10-15 Thread GitBox
nandakumar131 commented on issue #19: HDDS-2299. BlockManager should allocate a 
block in excluded pipelines if none other left
URL: https://github.com/apache/hadoop-ozone/pull/19#issuecomment-542170454
 
 
   Test failures are not related, will merge this shortly.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2309) Optimise OzoneManagerDoubleBuffer::flushTransactions to flush in batches

2019-10-15 Thread Rajesh Balamohan (Jira)
Rajesh Balamohan created HDDS-2309:
--

 Summary: Optimise OzoneManagerDoubleBuffer::flushTransactions to 
flush in batches
 Key: HDDS-2309
 URL: https://issues.apache.org/jira/browse/HDDS-2309
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Reporter: Rajesh Balamohan
 Attachments: Screenshot 2019-10-15 at 4.19.13 PM.png

When running a write-heavy benchmark, 
{{org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.flushTransactions}} 
was invoked for pretty much every write.

This forces {{cleanupCache}} to be invoked, which ends up choking the 
single-thread executor. Attaching the profiler information, which gives more 
details.

Ideally, {{flushTransactions}} should batch up the work to reduce the load on 
RocksDB.

 

[https://github.com/apache/hadoop-ozone/blob/master/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java#L130]

 

[https://github.com/apache/hadoop-ozone/blob/master/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java#L322]

 (implementation of canFlush() has to be optimized for correct batching).
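
A rough sketch of the batching idea only (hypothetical names, not the actual 
OzoneManagerDoubleBuffer code): accumulate queued transactions and commit them 
in one batch, so the RocksDB write and the cache cleanup run once per batch 
instead of once per write.

{code:java}
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.function.Consumer;

final class BatchingFlushSketch<T> {
  private final Queue<T> currentBuffer = new ArrayDeque<>();

  synchronized void add(T txn) {
    currentBuffer.add(txn);
    notifyAll();
  }

  // Swap out everything queued so far; called by the single flusher thread.
  private synchronized List<T> drain() throws InterruptedException {
    while (currentBuffer.isEmpty()) {
      wait();
    }
    List<T> batch = new ArrayList<>(currentBuffer);
    currentBuffer.clear();
    return batch;
  }

  void flushOnce(Consumer<List<T>> commitBatch) throws InterruptedException {
    List<T> batch = drain();
    commitBatch.accept(batch); // one batched RocksDB write + one cache cleanup
  }
}
{code}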

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-14890) Setting permissions on name directory fails on non posix compliant filesystems

2019-10-15 Thread hirik (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hirik reopened HDFS-14890:
--

> Setting permissions on name directory fails on non posix compliant filesystems
> --
>
> Key: HDFS-14890
> URL: https://issues.apache.org/jira/browse/HDFS-14890
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.1
> Environment: Windows 10.
>Reporter: hirik
>Assignee: Siddharth Wagle
>Priority: Blocker
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14890.01.patch
>
>
> Hi,
> HDFS NameNode and JournalNode are not starting on a Windows machine. Found 
> the related exception below in the logs.
> Caused by: java.lang.UnsupportedOperationException
> at java.base/java.nio.file.Files.setPosixFilePermissions(Files.java:2155)
> at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:452)
> at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:591)
> at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:613)
> at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:188)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1206)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:422)
> at 
> com.slog.dfs.hdfs.nn.NameNodeServiceImpl.delayedStart(NameNodeServiceImpl.java:147)
>  
> Code changes related to this issue: 
> [https://github.com/apache/hadoop/commit/07e3cf952eac9e47e7bd5e195b0f9fc28c468313#diff-1a56e69d50f21b059637cfcbf1d23f11]
>  
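
A minimal sketch of the kind of guard that avoids this on non-POSIX 
filesystems (illustrative only, not the actual Storage/NNStorage fix): only 
call setPosixFilePermissions when the underlying file store supports the 
POSIX attribute view.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.util.EnumSet;
import java.util.Set;

public class SafePermissions {

  static void setDirPermissions(Path dir, Set<PosixFilePermission> perms)
      throws IOException {
    // On Windows/NTFS the "posix" view is unsupported; skip instead of
    // throwing UnsupportedOperationException as in the stack trace above.
    if (Files.getFileStore(dir).supportsFileAttributeView("posix")) {
      Files.setPosixFilePermissions(dir, perms);
    }
  }

  public static void main(String[] args) throws IOException {
    Path dir = Paths.get("build", "name-dir-test");
    Files.createDirectories(dir);
    setDirPermissions(dir, EnumSet.of(
        PosixFilePermission.OWNER_READ,
        PosixFilePermission.OWNER_WRITE,
        PosixFilePermission.OWNER_EXECUTE));
  }
}
{code}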



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] cxorm commented on issue #2: HDDS-1737. Add Volume check in KeyManager and File Operations.

2019-10-15 Thread GitBox
cxorm commented on issue #2: HDDS-1737. Add Volume check in KeyManager and File 
Operations.
URL: https://github.com/apache/hadoop-ozone/pull/2#issuecomment-542108183
 
 
   /retest


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] ChenSammi opened a new pull request #29: Hdds 2034

2019-10-15 Thread GitBox
ChenSammi opened a new pull request #29: Hdds 2034
URL: https://github.com/apache/hadoop-ozone/pull/29
 
 
   ## NOTICE
   
   Previous Hadoop trunk PR link: 
   https://github.com/apache/hadoop/pull/1650
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] elek commented on issue #14: HDDS-2295. Display log of freon on the standard output

2019-10-15 Thread GitBox
elek commented on issue #14: HDDS-2295. Display log of freon on the standard 
output
URL: https://github.com/apache/hadoop-ozone/pull/14#issuecomment-542081422
 
 
   > Here's some sample output. I find it confusing/messy.
   
   Try the same with errors/problems ;-) If you really hate this information 
we can hide it behind a flag...


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] elek commented on a change in pull request #14: HDDS-2295. Display log of freon on the standard output

2019-10-15 Thread GitBox
elek commented on a change in pull request #14: HDDS-2295. Display log of freon 
on the standard output
URL: https://github.com/apache/hadoop-ozone/pull/14#discussion_r334792208
 
 

 ##
 File path: hadoop-ozone/common/src/main/bin/ozone
 ##
 @@ -124,7 +124,7 @@ function ozonecmd_case
 ;;
 freon)
   HADOOP_CLASSNAME=org.apache.hadoop.ozone.freon.Freon
-  OZONE_FREON_OPTS="${OZONE_FREON_OPTS} -Dhadoop.log.file=ozone-freon.log 
-Dlog4j.configuration=file:${ozone_shell_log4j}"
+  OZONE_FREON_OPTS="${OZONE_FREON_OPTS}"
 
 Review comment:
   Without the specific ozone_shell_log4j configuration, the default log4j 
configuration is used, which prints everything to stdout at INFO level.
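
   For reference, a console-based log4j setup of the kind that takes effect 
here typically looks something like this (illustrative sketch only, not the 
exact file shipped with Ozone):

```properties
# Illustrative only -- not the exact default log4j.properties.
# A console appender at INFO level sends all framework logs to stdout,
# which is what freon now inherits without the ozone-shell-log4j config.
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c{2} - %m%n
```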


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] elek commented on issue #14: HDDS-2295. Display log of freon on the standard output

2019-10-15 Thread GitBox
elek commented on issue #14: HDDS-2295. Display log of freon on the standard 
output
URL: https://github.com/apache/hadoop-ozone/pull/14#issuecomment-542080468
 
 
   > I'm not sure Freon is really that much different. It has the "progress 
bar" that should ideally not be messed up by the various log lines.
   
   It's a developer tool, so I think it's better to print out all the available 
development information. I am very happy to create a follow-up jira and improve 
the progress bar. The current progress bar is very nice from the cli but does 
not work very well in a containerized environment.
   
   
   Freon is more like a long-lived service than a cli app, similar to scm or 
om, which print all their problems to stdout.
   
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] timmylicheng opened a new pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-15 Thread GitBox
timmylicheng opened a new pull request #28: HDDS-1569 Support creating multiple 
pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28
 
 
   ## NOTICE
   
   Changes:
   1. Use PipelinePlacementPolicy as default for Factor THREE Ratis pipeline.
   2. Handle Factor ONE and Factor THREE pipelines differently in some places.
   3. Add limits for pipeline creation.
   4. Adjust a bunch of unit tests accordingly.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] cxorm commented on issue #3: HDDS-2220. HddsVolume needs a toString method.

2019-10-15 Thread GitBox
cxorm commented on issue #3: HDDS-2220. HddsVolume needs a toString method.
URL: https://github.com/apache/hadoop-ozone/pull/3#issuecomment-542076945
 
 
   Thanks @xiaoyuyao and @bharatviswa504 for the review and commit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2308) Switch to centos with the apache/ozone-build docker image

2019-10-15 Thread Marton Elek (Jira)
Marton Elek created HDDS-2308:
-

 Summary: Switch to centos with the apache/ozone-build docker image
 Key: HDDS-2308
 URL: https://issues.apache.org/jira/browse/HDDS-2308
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Marton Elek
Assignee: Marton Elek
 Attachments: hs_err_pid16346.log

I noticed multiple JVM crashes in the daily builds:

 
{code:java}

[ERROR] ExecutionException The forked VM terminated without properly saying 
goodbye. VM crash or System.exit called?
  
  
[ERROR] Command was /bin/sh -c cd /workdir/hadoop-ozone/ozonefs && 
/usr/lib/jvm/java-1.8-openjdk/jre/bin/java -Xmx2048m 
-XX:+HeapDumpOnOutOfMemoryError -jar 
/workdir/hadoop-ozone/ozonefs/target/surefire/surefirebooter9018689154779946208.jar
 /workdir/hadoop-ozone/ozonefs/target/surefire 2019-10-06T14-52-40_697-jvmRun1 
surefire7569723928289175829tmp surefire_947955725320624341206tmp
  
  
[ERROR] Error occurred in starting fork, check output in log
  
  
[ERROR] Process Exit Code: 139
  
  
[ERROR] Crashed tests:
  
  
[ERROR] org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
  
  
[ERROR] ExecutionException The forked VM terminated without properly 
saying goodbye. VM crash or System.exit called?
  
  
[ERROR] Command was /bin/sh -c cd /workdir/hadoop-ozone/ozonefs && 
/usr/lib/jvm/java-1.8-openjdk/jre/bin/java -Xmx2048m 
-XX:+HeapDumpOnOutOfMemoryError -jar 
/workdir/hadoop-ozone/ozonefs/target/surefire/surefirebooter5429192218879128313.jar
 /workdir/hadoop-ozone/ozonefs/target/surefire 2019-10-06T14-52-40_697-jvmRun1 
surefire7227403571189445391tmp surefire_1011197392458143645283tmp
  
  
[ERROR] Error occurred in starting fork, check output in log
  
  
[ERROR] Process Exit Code: 139
  
  
[ERROR] Crashed tests:
  
  
[ERROR] org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
  
  
[ERROR] org.apache.maven.surefire.booter.SurefireBooterForkException: 
ExecutionException The forked VM terminated without properly saying goodbye. VM 
crash or System.exit called?
  
  
[ERROR] Command was /bin/sh -c cd /workdir/hadoop-ozone/ozonefs && 
/usr/lib/jvm/java-1.8-openjdk/jre/bin/java -Xmx2048m 
-XX:+HeapDumpOnOutOfMemoryError -jar 
/workdir/hadoop-ozone/ozonefs/target/surefire/surefirebooter1355604543311368443.jar
 /workdir/hadoop-ozone/ozonefs/target/surefire 2019-10-06T14-52-40_697-jvmRun1 
surefire3938612864214747736tmp surefire_933162535733309260236tmp
  
  
[ERROR] Error occurred in starting fork, check output in log
  
  
[ERROR] Process Exit Code: 139
  
  
[ERROR] ExecutionException The forked VM terminated without properly 
saying goodbye. VM crash or System.exit called?
  
  
[ERROR] Command was /bin/sh -c cd /workdir/hadoop-ozone/ozonefs && 
/usr/lib/jvm/java-1.8-openjdk/jre/bin/java -Xmx2048m 
-XX:+HeapDumpOnOutOfMemoryError -jar 
/workdir/hadoop-ozone/ozonefs/target/surefire/surefirebooter9018689154779946208.jar
 /workdir/hadoop-ozone/ozonefs/target/surefire 2019-10-06T14-52-40_697-jvmRun1 
surefire7569723928289175829tmp surefire_947955725320624341206tmp
  
  
[ERROR] Error occurred in starting fork, check output in log
  
  
[ERROR] Process Exit Code: 139 {code}
 

Based on the crash log (uploaded) it's related to the rocksdb JNI interface.

In the current ozone-build docker image (which provides the build environment) 
we use alpine, where musl libc is used instead of the mainstream glibc. I think 
it would be safer to use the same glibc that is used in production.

I tested with a centos-based docker image and it seems to be more stable. I 
didn't see any more JVM crashes.
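
A minimal sketch of what the base-image switch could look like (illustrative 
only, assuming centos:7 and the stock OpenJDK 8 package; not the actual 
apache/ozone-build Dockerfile):

{code}
# Illustrative sketch only -- not the real ozone-build Dockerfile.
# Move from a musl-based alpine image to a glibc-based centos image so the
# JDK + rocksdb JNI combination matches what is used in production.
FROM centos:7
RUN yum install -y java-1.8.0-openjdk-devel which && yum clean all
ENV JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
{code}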



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13762) Support non-volatile storage class memory(SCM) in HDFS cache directives

2019-10-15 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-13762.

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Closing this issue as all the sub-tasks are resolved. Thank you [~PhiloHe], 
[~Sammi], [~rakeshr], [~weichiu] and [~anoop.hbase] for the contributions and 
reviews.

> Support non-volatile storage class memory(SCM) in HDFS cache directives
> ---
>
> Key: HDFS-13762
> URL: https://issues.apache.org/jira/browse/HDFS-13762
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: caching, datanode
>Reporter: Sammi Chen
>Assignee: Feilong He
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13762.000.patch, HDFS-13762.001.patch, 
> HDFS-13762.002.patch, HDFS-13762.003.patch, HDFS-13762.004.patch, 
> HDFS-13762.005.patch, HDFS-13762.006.patch, HDFS-13762.007.patch, 
> HDFS-13762.008.patch, HDFS_Persistent_Memory_Cache_Perf_Results.pdf, 
> SCMCacheDesign-2018-11-08.pdf, SCMCacheDesign-2019-07-12.pdf, 
> SCMCacheDesign-2019-07-16.pdf, SCMCacheDesign-2019-3-26.pdf, 
> SCMCacheTestPlan-2019-3-27.pdf, SCMCacheTestPlan.pdf
>
>
> Non-volatile storage class memory is a type of memory that keeps its data 
> content after a power failure or across power cycles. A non-volatile storage 
> class memory device usually has access speed close to that of a memory DIMM, 
> at a lower cost than memory. So today it is usually used as a supplement to 
> memory, to hold long-term persistent data such as cached data.
> Currently in HDFS, we have an OS page cache backed read-only cache and a 
> RAMDISK based lazy-write cache. Non-volatile memory suits both of these 
> functions. This Jira aims to enable storage class memory first in the read 
> cache. Although storage class memory has non-volatile characteristics, to 
> keep the same behavior as the current read-only cache, we don't use its 
> persistence characteristics for now.
>  
>  
>  
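
For context, cache directives are managed with the existing cacheadmin CLI; a 
minimal usage sketch (the pmem-specific datanode property name is assumed here 
and should be checked against the committed documentation):

{code}
# Illustrative usage only.
# Datanode side: point the cache at one or more persistent-memory mounts
# (assumed property name: dfs.datanode.cache.pmem.dirs).
# Client side: standard cache directives.
hdfs cacheadmin -addPool cachePool
hdfs cacheadmin -addDirective -path /warehouse/hot_table -pool cachePool
hdfs cacheadmin -listDirectives
{code}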



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org