[jira] [Commented] (HDFS-13972) RBF: Support for Delegation Token (WebHDFS)

2019-03-18 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794784#comment-16794784
 ] 

Surendra Singh Lilhore commented on HDFS-13972:
---

Thanks [~crh] for the patch.

One issue I observed here in webhdfs; I am not sure whether we should handle 
it here or not.

RouterWebHdfsMethods#redirectURI (Line #399) replaces only the routerId in the 
redirected URL, but it should also replace the router token with the namenode 
token. It should behave the same as RouterWebHdfsMethods#redirectURI (Line #504).
{code:java}
// We modify the namenode location and the path
redirectLocation = redirectLocation
    .replaceAll("(?<=[?&;])namenoderpcaddress=.*?(?=[&;])",
        "namenoderpcaddress=" + router.getRouterId())
    .replaceAll("(?<=[/])webhdfs/v1/.*?(?=[?])",
        "webhdfs/v1" + path);{code}

Currently CREATE will fail without this change.
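
For illustration, the missing replacement might look like the following sketch 
(the parameter name and the token variable are assumptions, mirroring the 
Line #504 handling):
{code:java}
// Hedged sketch: also swap the router's delegation token for the namenode's
// token in the redirect URL (names assumed for illustration).
redirectLocation = redirectLocation
    .replaceAll("(?<=[?&;])delegation=.*?(?=[&;])",
        "delegation=" + namenodeToken.encodeToUrlString());
{code}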

> RBF: Support for Delegation Token (WebHDFS)
> ---
>
> Key: HDFS-13972
> URL: https://issues.apache.org/jira/browse/HDFS-13972
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13972-HDFS-13891.001.patch, 
> HDFS-13972-HDFS-13891.002.patch, HDFS-13972-HDFS-13891.003.patch, 
> HDFS-13972-HDFS-13891.004.patch, HDFS-13972-HDFS-13891.005.patch, 
> HDFS-13972-HDFS-13891.006.patch, HDFS-13972-HDFS-13891.007.patch, 
> HDFS-13972-HDFS-13891.008.patch, HDFS-13972-HDFS-13891.009.patch, 
> TestRouterWebHDFSContractTokens.java
>
>
> HDFS Router should support issuing HDFS delegation tokens through WebHDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14369) RBF: Fix trailing "/" for webhdfs

2019-03-18 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-14369:
-
Attachment: HDFS-14369-HDFS-13891-regressiontest-001.patch

> RBF: Fix trailing "/" for webhdfs
> -
>
> Key: HDFS-14369
> URL: https://issues.apache.org/jira/browse/HDFS-14369
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14369-HDFS-13891-regressiontest-001.patch
>
>
> WebHDFS doesn't trim the trailing slash, causing a discrepancy in operations.
> Example below:
> --
> Using the HDFS API, two directories are listed.
> {code}
> $ hdfs dfs -ls hdfs://:/tmp/
> Found 2 items
> drwxrwxrwx   - hdfs supergroup  0 2018-11-09 17:50 
> hdfs://:/tmp/tmp1
> drwxrwxrwx   - hdfs supergroup  0 2018-11-09 17:50 
> hdfs://:/tmp/tmp2
> {code}
> Using the WebHDFS API, only one directory is listed.
> {code}
> $ curl -u : --negotiate -i 
> "http://:50071/webhdfs/v1/tmp/?op=LISTSTATUS"
> (snip)
> {"FileStatuses":{"FileStatus":[
> {"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16387,"group":"supergroup","length":0,"modificationTime":1552016766769,"owner":"hdfs","pathSuffix":"tmp1","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
> ]}}
> {code}
> The mount table is as follows:
> {code}
> $ hdfs dfsrouteradmin -ls /tmp
> Mount Table Entries:
> SourceDestinations  Owner 
> Group Mode  Quota/Usage  
> /tmp  ns1->/tmp aajisaka  
> users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> /tmp/tmp1 ns1->/tmp/tmp1aajisaka  
> users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> /tmp/tmp2 ns2->/tmp/tmp2aajisaka  
> users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> {code}
> Without the trailing slash, two directories are listed.
> {code}
> $ curl -u : --negotiate -i 
> "http://:50071/webhdfs/v1/tmp?op=LISTSTATUS"
> (snip)
> {"FileStatuses":{"FileStatus":[
> {"accessTime":1541753421917,"blockSize":0,"childrenNum":0,"fileId":0,"group":"supergroup","length":0,"modificationTime":1541753421917,"owner":"hdfs","pathSuffix":"tmp1","permission":"777","replication":0,"storagePolicy":0,"symlink":"","type":"DIRECTORY"},
> {"accessTime":1541753429812,"blockSize":0,"childrenNum":0,"fileId":0,"group":"supergroup","length":0,"modificationTime":1541753429812,"owner":"hdfs","pathSuffix":"tmp2","permission":"777","replication":0,"storagePolicy":0,"symlink":"","type":"DIRECTORY"}
> ]}}
> {code}
> [~ajisakaa] Thanks for reporting this, I borrowed the text from 
> HDFS-13972



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1299) Support TokenIssuer interface to run MR/Spark with OzoneFileSystem in secure mode

2019-03-18 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794787#comment-16794787
 ] 

Xiaoyu Yao commented on HDDS-1299:
--

WIP branch at: [https://github.com/xiaoyuyao/hadoop/commits/HDDS-1299]; I will 
create a PR once all the tests pass.
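
Roughly, the wiring being added amounts to the following sketch (the helper 
names on the Ozone side are assumptions, since the branch is still WIP):
{code:java}
// Hedged sketch: OzoneFileSystem exposing delegation tokens so that MR/Spark
// job submission can collect them in secure mode.
@Override
public String getCanonicalServiceName() {
  // service name under which the token is stored in the job credentials
  return ozoneClientAdapter.getCanonicalServiceName(); // assumed helper
}

@Override
public Token<OzoneTokenIdentifier> getDelegationToken(String renewer)
    throws IOException {
  return ozoneClientAdapter.getDelegationToken(renewer); // assumed helper
}
{code}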

> Support TokenIssuer interface to run MR/Spark with OzoneFileSystem in secure 
> mode
> -
>
> Key: HDDS-1299
> URL: https://issues.apache.org/jira/browse/HDDS-1299
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Blocker
>
> This ticket is opened to add TokenIssuer interface support to OzoneFileSystem 
> so that MR and Spark jobs can run with OzoneFileSystem in secure mode. 
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14369) RBF: Fix trailing "/" for webhdfs

2019-03-18 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794788#comment-16794788
 ] 

Akira Ajisaka commented on HDFS-14369:
--

Thanks [~crh] for filing this issue!
I wrote a regression test and then found that the issue is already assigned to 
you. Attaching a patch for reference.
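
For context, such a regression test boils down to something like the following 
sketch (the class setup and helper names are assumptions; the actual test is 
in the attached patch):
{code:java}
// Hedged sketch: list the same directory through the router's WebHDFS
// endpoint with and without a trailing slash; both must see the same
// mount entries.
@Test
public void testListStatusWithTrailingSlash() throws Exception {
  FileSystem webHdfs = getRouterWebHdfsFileSystem(); // assumed helper
  FileStatus[] withSlash = webHdfs.listStatus(new Path("/tmp/"));
  FileStatus[] withoutSlash = webHdfs.listStatus(new Path("/tmp"));
  assertEquals(withoutSlash.length, withSlash.length);
}
{code}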

> RBF: Fix trailing "/" for webhdfs
> -
>
> Key: HDFS-14369
> URL: https://issues.apache.org/jira/browse/HDFS-14369
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14369-HDFS-13891-regressiontest-001.patch
>
>
> WebHDFS doesn't trim the trailing slash, causing a discrepancy in operations.
> Example below:
> --
> Using the HDFS API, two directories are listed.
> {code}
> $ hdfs dfs -ls hdfs://:/tmp/
> Found 2 items
> drwxrwxrwx   - hdfs supergroup  0 2018-11-09 17:50 
> hdfs://:/tmp/tmp1
> drwxrwxrwx   - hdfs supergroup  0 2018-11-09 17:50 
> hdfs://:/tmp/tmp2
> {code}
> Using the WebHDFS API, only one directory is listed.
> {code}
> $ curl -u : --negotiate -i 
> "http://:50071/webhdfs/v1/tmp/?op=LISTSTATUS"
> (snip)
> {"FileStatuses":{"FileStatus":[
> {"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16387,"group":"supergroup","length":0,"modificationTime":1552016766769,"owner":"hdfs","pathSuffix":"tmp1","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
> ]}}
> {code}
> The mount table is as follows:
> {code}
> $ hdfs dfsrouteradmin -ls /tmp
> Mount Table Entries:
> SourceDestinations  Owner 
> Group Mode  Quota/Usage  
> /tmp  ns1->/tmp aajisaka  
> users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> /tmp/tmp1 ns1->/tmp/tmp1aajisaka  
> users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> /tmp/tmp2 ns2->/tmp/tmp2aajisaka  
> users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> {code}
> Without the trailing slash, two directories are listed.
> {code}
> $ curl -u : --negotiate -i 
> "http://:50071/webhdfs/v1/tmp?op=LISTSTATUS"
> (snip)
> {"FileStatuses":{"FileStatus":[
> {"accessTime":1541753421917,"blockSize":0,"childrenNum":0,"fileId":0,"group":"supergroup","length":0,"modificationTime":1541753421917,"owner":"hdfs","pathSuffix":"tmp1","permission":"777","replication":0,"storagePolicy":0,"symlink":"","type":"DIRECTORY"},
> {"accessTime":1541753429812,"blockSize":0,"childrenNum":0,"fileId":0,"group":"supergroup","length":0,"modificationTime":1541753429812,"owner":"hdfs","pathSuffix":"tmp2","permission":"777","replication":0,"storagePolicy":0,"symlink":"","type":"DIRECTORY"}
> ]}}
> {code}
> [~ajisakaa] Thanks for reporting this, I borrowed the text from 
> HDFS-13972



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1299) Support TokenIssuer interface for OzoneFileSystem

2019-03-18 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1299:
-
Summary: Support TokenIssuer interface for OzoneFileSystem  (was: Support 
TokenIssuer interface to run MR/Spark with OzoneFileSystem in secure mode)

> Support TokenIssuer interface for OzoneFileSystem
> -
>
> Key: HDDS-1299
> URL: https://issues.apache.org/jira/browse/HDDS-1299
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Blocker
>
> This ticket is opened to add TokenIssuer interface support to OzoneFileSystem 
> so that MR and Spark jobs can run with OzoneFileSystem in secure mode. 
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1297) Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization

2019-03-18 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-1297:

Status: Patch Available  (was: Open)

> Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization
> 
>
> Key: HDDS-1297
> URL: https://issues.apache.org/jira/browse/HDDS-1297
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1297.001.patch
>
>
> Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization
> {code}
> java.lang.IllegalArgumentException: 30 is not within min = 500 or max = 
> 10
>   at 
> org.apache.hadoop.hdds.server.ServerUtils.sanitizeUserArgs(ServerUtils.java:66)
>   at 
> org.apache.hadoop.hdds.scm.HddsServerUtil.getStaleNodeInterval(HddsServerUtil.java:256)
>   at 
> org.apache.hadoop.hdds.scm.node.NodeStateManager.<init>(NodeStateManager.java:136)
>   at 
> org.apache.hadoop.hdds.scm.node.SCMNodeManager.<init>(SCMNodeManager.java:105)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.initalizeSystemManagers(StorageContainerManager.java:391)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.<init>(StorageContainerManager.java:286)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.<init>(StorageContainerManager.java:218)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:684)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:628)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createSCM(MiniOzoneClusterImpl.java:458)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:392)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.testBothGetandPutSmallFile(TestOzoneContainer.java:237)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1297) Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization

2019-03-18 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794804#comment-16794804
 ] 

Yiqun Lin commented on HDDS-1297:
-

Looking into this: the IllegalArgumentException is caused by the default value 
adjustments in HDDS-1284, which increased 
{{OZONE_SCM_STALENODE_INTERVAL_DEFAULT}} from {{90s}} to {{5m}}. Attaching a 
fix patch that increases the value of 
{{MiniOzoneCluster#DEFAULT_HB_PROCESSOR_INTERVAL_MS}} accordingly.
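
As a rough illustration of the interval relationship (the keys and values 
below are illustrative, not the exact change in the patch):
{code:java}
// Hedged sketch: keep the heartbeat processing interval consistent with the
// new 5m stale-node default, so that the sanity check performed in
// HddsServerUtil#getStaleNodeInterval passes again.
OzoneConfiguration conf = new OzoneConfiguration();
conf.setTimeDuration(ScmConfigKeys.OZONE_SCM_HEARTBEAT_PROCESS_INTERVAL,
    1, TimeUnit.SECONDS);
conf.setTimeDuration(ScmConfigKeys.OZONE_SCM_STALENODE_INTERVAL,
    5, TimeUnit.MINUTES);
{code}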

> Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization
> 
>
> Key: HDDS-1297
> URL: https://issues.apache.org/jira/browse/HDDS-1297
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1297.001.patch
>
>
> Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization
> {code}
> java.lang.IllegalArgumentException: 30 is not within min = 500 or max = 
> 10
>   at 
> org.apache.hadoop.hdds.server.ServerUtils.sanitizeUserArgs(ServerUtils.java:66)
>   at 
> org.apache.hadoop.hdds.scm.HddsServerUtil.getStaleNodeInterval(HddsServerUtil.java:256)
>   at 
> org.apache.hadoop.hdds.scm.node.NodeStateManager.<init>(NodeStateManager.java:136)
>   at 
> org.apache.hadoop.hdds.scm.node.SCMNodeManager.<init>(SCMNodeManager.java:105)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.initalizeSystemManagers(StorageContainerManager.java:391)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.<init>(StorageContainerManager.java:286)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.<init>(StorageContainerManager.java:218)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:684)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:628)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createSCM(MiniOzoneClusterImpl.java:458)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:392)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.testBothGetandPutSmallFile(TestOzoneContainer.java:237)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1297) Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization

2019-03-18 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-1297:

Attachment: HDDS-1297.001.patch

> Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization
> 
>
> Key: HDDS-1297
> URL: https://issues.apache.org/jira/browse/HDDS-1297
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1297.001.patch
>
>
> Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization
> {code}
> java.lang.IllegalArgumentException: 30 is not within min = 500 or max = 
> 10
>   at 
> org.apache.hadoop.hdds.server.ServerUtils.sanitizeUserArgs(ServerUtils.java:66)
>   at 
> org.apache.hadoop.hdds.scm.HddsServerUtil.getStaleNodeInterval(HddsServerUtil.java:256)
>   at 
> org.apache.hadoop.hdds.scm.node.NodeStateManager.<init>(NodeStateManager.java:136)
>   at 
> org.apache.hadoop.hdds.scm.node.SCMNodeManager.<init>(SCMNodeManager.java:105)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.initalizeSystemManagers(StorageContainerManager.java:391)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.<init>(StorageContainerManager.java:286)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.<init>(StorageContainerManager.java:218)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:684)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:628)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createSCM(MiniOzoneClusterImpl.java:458)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:392)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.testBothGetandPutSmallFile(TestOzoneContainer.java:237)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14315) Add more detailed log message when decreasing replication factor < max in snapshots

2019-03-18 Thread Daisuke Kobayashi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794825#comment-16794825
 ] 

Daisuke Kobayashi commented on HDFS-14315:
--

Thanks for sharing your findings here [~pifta]. Yup, I was aware of that 
behavior too, but was not aware of HDFS-11146!
 Even once HDFS-11146 is resolved in the near future, it is still hard for 
users to notice that, under a particular condition, replicas for files within 
snapshots don't get decreased immediately, given the current namenode design. 
Hence my goal here is to add more of a hint to the namenode log. Does this 
make sense to you?
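
For illustration, the extra hint boils down to something like the following 
sketch (the variable names are assumptions; the actual change is in the 
attached patches):
{code:java}
// In FSDirAttrOp#unprotectedSetReplication (sketch): when the requested
// factor is lower than the maximum replication pinned by snapshots, log
// both the requested value and the snapshot maximum.
if (oldBR > targetReplication && replication < targetReplication) {
  FSDirectory.LOG.info("Decreasing replication from " + oldBR + " to "
      + targetReplication + " for " + iip.getPath() + ". Requested value is "
      + replication + ", but " + targetReplication
      + " is the maximum in snapshots");
}
{code}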

> Add more detailed log message when decreasing replication factor < max in 
> snapshots
> ---
>
> Key: HDFS-14315
> URL: https://issues.apache.org/jira/browse/HDFS-14315
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Daisuke Kobayashi
>Assignee: Daisuke Kobayashi
>Priority: Minor
> Attachments: HDFS-14315.000.patch, HDFS-14315.001.patch
>
>
> When changing replication factor for a given file, the following 3 types of 
> logging appear in the namenode log, but more detailed message would be useful 
> especially when the file is in snapshot(s).
> {noformat}
> Decreasing replication from X to Y for FILE
> Increasing replication from X to Y for FILE
> Replication remains unchanged at X for FILE
> {noformat}
> I have added additional log messages as well as further test scenarios to 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication#testReplicationWithSnapshot.
> The test sequence is:
> 1) A file is created with replication factor 1 (there are 5 datanodes)
> 2) Take a snapshot and increase the factor by 1. Continue this loop up to 5.
> 3) Setrep back to 3, but the target replication is decided to be 4, which is 
> the maximum in snapshots.
> {noformat}
> 2019-02-25 17:17:26,800 [IPC Server handler 9 on default port 55726] INFO  
> namenode.FSDirectory (FSDirAttrOp.java:unprotectedSetReplication(408)) - 
> Decreasing replication from 5 to 4 for /TestSnapshot/sub1/file1. Requested 
> value is 3, but 4 is the maximum in snapshots
> {noformat}
> 4) Setrep to 3 again, but it's unchanged as follows. Both 3) and 4) are 
> expected.
> {noformat}
> 2019-02-25 17:17:26,804 [IPC Server handler 6 on default port 55726] INFO  
> namenode.FSDirectory (FSDirAttrOp.java:unprotectedSetReplication(420)) - 
> Replication remains unchanged at 4 for /TestSnapshot/sub1/file1 . Requested 
> value is 3, but 4 is the maximum in snapshots.
> {noformat}
> 5) Make sure the number of replicas in datanodes remains 4.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-18 Thread Rakesh R (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794827#comment-16794827
 ] 

Rakesh R commented on HDFS-14355:
-

Thanks [~PhiloHe] for the good progress. Adding a second set of review 
comments; please go through them.
 # Close {{file = new RandomAccessFile(filePath, "rw");}}
{code:java}

IOUtils.closeQuietly(file);
{code}

 # Looks like unused code, please remove it.
{code:java}
  private FsDatasetImpl dataset;

  public MemoryMappableBlockLoader(FsDatasetImpl dataset) {
this.dataset = dataset;
  }
{code}

 # FileMappableBlockLoader#loadVolumes exception handling. I feel this is not 
required, please remove it. If you still need this for some purpose, then 
please add message arg to {{IOException("Failed to parse persistent memory 
location " + location, e)}}
{code:java}
  } catch (IllegalArgumentException e) {
LOG.error("Failed to parse persistent memory location " + location +
" for " + e.getMessage());
throw new IOException(e);
  }
{code}

 # Debuggability: FileMappableBlockLoader#verifyIfValidPmemVolume. Here, add 
exception message arg to {{throw new IOException(t);}}
{code:java}
  throw new IOException(
  "Exception while writing data to persistent storage dir: " + pmemDir,
  t);
{code}

 # Debuggability: FileMappableBlockLoader#load. Here, add blockFileName to the 
exception message.
{code:java}
  if (out == null) {
throw new IOException("Fail to map the block " + blockFileName
+ " to persistent storage.");
  }
{code}

 # Debuggability: FileMappableBlockLoader#verifyChecksumAndMapBlock
{code:java}
  throw new IOException(
  "checksum verification failed for the blockfile:" + blockFileName
  + ":  premature EOF");
{code}

 # FileMappedBlock#afterCache. Suppressing the exception may give wrong 
statistics, right? Assume {{afterCache}} throws an exception and the file path 
is not cached. Then the cached block won't be readable but still consumes 
space unnecessarily. How about moving the {{mappableBlock.afterCache();}} call 
right after the {{mappableBlockLoader.load()}} call and adding a throws 
IOException clause to {{afterCache}}? (See the sketch after this list.)
{code:java}
  LOG.warn("Fail to find the replica file of PoolID = " +
  key.getBlockPoolId() + ", BlockID = " + key.getBlockId() +
  " for :" + e.getMessage());
{code}

 # FsDatasetCache.java: the reserve() and release() OS page size math is not 
required in FileMappedBlock. I would appreciate it if you could avoid these 
calls. Also, can you revisit the caching and un-caching logic (for example, 
the {{datanode.getMetrics()}} updates, etc.) present in this class.
{code:java}
CachingTask#run(){

long newUsedBytes = reserve(length);
...
if (reservedBytes) {
   release(length);
}

UncachingTask#run() {
...
long newUsedBytes = release(value.mappableBlock.getLength());
{code}

 # I have changed the jira status and triggered QA. Please fix the checkstyle 
warnings and test case failures. Also, can you uncomment the two 
{{Test//(timeout=12)}} occurrences in the test.
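
Regarding item 7, a minimal sketch of the suggested restructuring (signatures 
assumed, names taken from the comments above):
{code:java}
// Hedged sketch: call afterCache() right after a successful load() and let
// it propagate IOException, so a failure is not silently swallowed after
// the block has already been counted as cached.
MappableBlock mappableBlock = mappableBlockLoader.load(
    length, blockIn, metaIn, blockFileName, key); // signature assumed
mappableBlock.afterCache(); // now declared: void afterCache() throws IOException
{code}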

> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch
>
>
> This task is to implement the caching to persistent memory using pure 
> {{java.nio.MappedByteBuffer}}, which could be useful in case native support 
> isn't available or convenient in some environments or platforms.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14357) Update the relevant docs for HDFS cache on SCM support

2019-03-18 Thread Feilong He (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feilong He updated HDFS-14357:
--
Attachment: HDFS-14357.000.patch

> Update the relevant docs for HDFS cache on SCM support
> --
>
> Key: HDFS-14357
> URL: https://issues.apache.org/jira/browse/HDFS-14357
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14357.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1297) Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794829#comment-16794829
 ] 

Hadoop QA commented on HDDS-1297:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 46s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 21s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.ozone.client.rpc.TestBCSID |
|   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
|   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
|   | hadoop.ozone.om.TestScmChillMode |
|   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
|   | hadoop.ozone.TestStorageContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2540/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1297 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/att

[jira] [Comment Edited] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-18 Thread Rakesh R (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794827#comment-16794827
 ] 

Rakesh R edited comment on HDFS-14355 at 3/18/19 8:31 AM:
--

Thanks [~PhiloHe] for the good progress. Adding a second set of review 
comments; please go through them.
# Close {{file = new RandomAccessFile(filePath, "rw");}}
{code:java}

IOUtils.closeQuietly(file);
{code}
# Looks like unused code, please remove it.
{code:java}
  private FsDatasetImpl dataset;

  public MemoryMappableBlockLoader(FsDatasetImpl dataset) {
this.dataset = dataset;
  }
{code}
# FileMappableBlockLoader#loadVolumes exception handling. I feel this is not 
required, please remove it. If you still need this for some purpose, then 
please add message arg to {{IOException("Failed to parse persistent memory 
location " + location, e)}}
{code:java}
  } catch (IllegalArgumentException e) {
LOG.error("Failed to parse persistent memory location " + location +
" for " + e.getMessage());
throw new IOException(e);
  }
{code}
# Debuggability: FileMappableBlockLoader#verifyIfValidPmemVolume. Here, add 
exception message arg to {{throw new IOException(t);}}
{code:java}
  throw new IOException(
  "Exception while writing data to persistent storage dir: " + pmemDir,
  t);
{code}
# Debuggability: FileMappableBlockLoader#load. Here, add blockFileName to the 
exception message.
{code:java}
  if (out == null) {
throw new IOException("Fail to map the block " + blockFileName
+ " to persistent storage.");
  }
{code}
# Debuggability: FileMappableBlockLoader#verifyChecksumAndMapBlock
{code:java}
  throw new IOException(
  "checksum verification failed for the blockfile:" + blockFileName
  + ":  premature EOF");
{code}
# FileMappedBlock#afterCache. Suppressing the exception may give wrong 
statistics, right? Assume {{afterCache}} throws an exception and the file path 
is not cached. Then the cached block won't be readable but still consumes 
space unnecessarily. How about moving the {{mappableBlock.afterCache();}} call 
right after the {{mappableBlockLoader.load()}} call and adding a throws 
IOException clause to {{afterCache}}?
{code:java}
  LOG.warn("Fail to find the replica file of PoolID = " +
  key.getBlockPoolId() + ", BlockID = " + key.getBlockId() +
  " for :" + e.getMessage());
{code}
# FsDatasetCache.java: the reserve() and release() OS page size math is not 
required in FileMappedBlock. I would appreciate it if you could avoid these 
calls. Also, can you revisit the caching and un-caching logic (for example, 
the {{datanode.getMetrics()}} updates, etc.) present in this class.
{code:java}
CachingTask#run(){

long newUsedBytes = reserve(length);
...
if (reservedBytes) {
   release(length);
}

UncachingTask#run() {
...
long newUsedBytes = release(value.mappableBlock.getLength());
{code}
# I have changed the jira status and triggered QA. Please fix the checkstyle 
warnings and test case failures. Also, can you uncomment the two 
{{Test//(timeout=12)}} occurrences in the test.


was (Author: rakeshr):
Thanks [~PhiloHe] for the good progress. Adding second set of review comments, 
please go through it.
 # Close {{file = new RandomAccessFile(filePath, "rw");}}
{code:java}

IOUtils.closeQuietly(file);
{code}

 # Looks like unused code, please remove it.
{code:java}
  private FsDatasetImpl dataset;

  public MemoryMappableBlockLoader(FsDatasetImpl dataset) {
this.dataset = dataset;
  }
{code}

 # FileMappableBlockLoader#loadVolumes exception handling. I feel this is not 
required, please remove it. If you still need this for some purpose, then 
please add message arg to {{IOException("Failed to parse persistent memory 
location " + location, e)}}
{code:java}
  } catch (IllegalArgumentException e) {
LOG.error("Failed to parse persistent memory location " + location +
" for " + e.getMessage());
throw new IOException(e);
  }
{code}

 # Debuggability: FileMappableBlockLoader#verifyIfValidPmemVolume. Here, add 
exception message arg to {{throw new IOException(t);}}
{code:java}
  throw new IOException(
  "Exception while writing data to persistent storage dir: " + pmemDir,
  t);
{code}

 # Debuggability: FileMappableBlockLoader#load. Here, add blockFileName to the 
exception message.
{code:java}
  if (out == null) {
throw new IOException("Fail to map the block " + blockFileName
+ " to persistent storage.");
  }
{code}

 # Debuggability: FileMappableBlockLoader#verifyChecksumAndMapBlock
{code:java}
  throw new IOException(
  "checksum verification failed for the block

[jira] [Work logged] (HDDS-1217) Refactor ChillMode rules and chillmode manager

2019-03-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1217?focusedWorklogId=214644&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214644
 ]

ASF GitHub Bot logged work on HDDS-1217:


Author: ASF GitHub Bot
Created on: 18/Mar/19 09:09
Start Date: 18/Mar/19 09:09
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #558: 
HDDS-1217. Refactor ChillMode rules and chillmode manager.
URL: https://github.com/apache/hadoop/pull/558#discussion_r266346041
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ChillModeExitRule.java
 ##
 @@ -17,16 +17,87 @@
  */
 package org.apache.hadoop.hdds.scm.chillmode;
 
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+
 /**
- * Interface for defining chill mode exit rules.
+ * Abstract class for ChillModeExitRules. When a new rule is added, the new
+ * rule should extend this abstract class.
+ *
+ * Each rule Should do:
+ * 1. Should add a handler for the event it is looking for during the
+ * initialization of the rule.
+ * 2. Add the rule in ScmChillModeManager to list of the rules.
+ *
  *
  * @param <T>
  */
-public interface ChillModeExitRule<T> {
+public abstract class ChillModeExitRule<T> implements EventHandler<T> {
+
+  private final SCMChillModeManager chillModeManager;
+  private final String ruleName;
+
+  public ChillModeExitRule(SCMChillModeManager chillModeManager,
+  String ruleName) {
+this.chillModeManager = chillModeManager;
+this.ruleName = ruleName;
+  }
+
+  /**
+   * Return's the name of this ChillModeExit Rule.
+   * @return ruleName
+   */
+  public String getRuleName() {
+return ruleName;
+  }
+
+
+  /**
+   * Validate's this rule. If this rule condition is met, returns true, else
+   * returns false.
+   * @return boolean
+   */
+  public abstract boolean validate();
+
+  /**
+   * Actual processing logic for this rule.
+   * @param report
+   */
+  public abstract void process(T report);
 
 Review comment:
   This can also be protected.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214644)
Time Spent: 2h 20m  (was: 2h 10m)

> Refactor ChillMode rules and chillmode manager
> --
>
> Key: HDDS-1217
> URL: https://issues.apache.org/jira/browse/HDDS-1217
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> # Make the ChillModeExitRule an abstract class and move common logic for all 
> rules into this.
>  # Update tests for chill mode accordingly.
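> 
> For illustration, a concrete rule under this refactor boils down to something 
> like the following sketch (class and event names assumed):
> {code:java}
> // Hedged sketch: with the common onMessage() logic in the abstract base
> // class, a rule only implements validate()/process()/cleanup().
> public class DataNodeChillModeRule
>     extends ChillModeExitRule<NodeRegistrationContainerReport> {
>   private int registeredDataNodes = 0;
>   private final int requiredDataNodes;
> 
>   public DataNodeChillModeRule(SCMChillModeManager manager, int required) {
>     super(manager, "DataNodeChillModeRule");
>     this.requiredDataNodes = required;
>   }
> 
>   @Override
>   public boolean validate() {
>     return registeredDataNodes >= requiredDataNodes;
>   }
> 
>   @Override
>   public void process(NodeRegistrationContainerReport report) {
>     registeredDataNodes++; // one report per newly registered datanode
>   }
> 
>   @Override
>   public void cleanup() {
>     // nothing to release for this rule
>   }
> }
> {code}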



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1217) Refactor ChillMode rules and chillmode manager

2019-03-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1217?focusedWorklogId=214645&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214645
 ]

ASF GitHub Bot logged work on HDDS-1217:


Author: ASF GitHub Bot
Created on: 18/Mar/19 09:09
Start Date: 18/Mar/19 09:09
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #558: 
HDDS-1217. Refactor ChillMode rules and chillmode manager.
URL: https://github.com/apache/hadoop/pull/558#discussion_r266346141
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ChillModeExitRule.java
 ##
 @@ -17,16 +17,87 @@
  */
 package org.apache.hadoop.hdds.scm.chillmode;
 
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+
 /**
- * Interface for defining chill mode exit rules.
+ * Abstract class for ChillModeExitRules. When a new rule is added, the new
+ * rule should extend this abstract class.
+ *
+ * Each rule Should do:
+ * 1. Should add a handler for the event it is looking for during the
+ * initialization of the rule.
+ * 2. Add the rule in ScmChillModeManager to list of the rules.
+ *
  *
  * @param <T>
  */
-public interface ChillModeExitRule<T> {
+public abstract class ChillModeExitRule<T> implements EventHandler<T> {
+
+  private final SCMChillModeManager chillModeManager;
+  private final String ruleName;
+
+  public ChillModeExitRule(SCMChillModeManager chillModeManager,
+  String ruleName) {
+this.chillModeManager = chillModeManager;
+this.ruleName = ruleName;
+  }
+
+  /**
+   * Return's the name of this ChillModeExit Rule.
+   * @return ruleName
+   */
+  public String getRuleName() {
+return ruleName;
+  }
+
+
+  /**
+   * Validate's this rule. If this rule condition is met, returns true, else
+   * returns false.
+   * @return boolean
+   */
+  public abstract boolean validate();
+
+  /**
+   * Actual processing logic for this rule.
+   * @param report
+   */
+  public abstract void process(T report);
+
+  /**
+   * Cleanup action's need to be done, once this rule is satisfied.
+   */
+  public abstract void cleanup();
 
 Review comment:
   This can also be protected.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214645)
Time Spent: 2.5h  (was: 2h 20m)

> Refactor ChillMode rules and chillmode manager
> --
>
> Key: HDDS-1217
> URL: https://issues.apache.org/jira/browse/HDDS-1217
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> # Make the ChillModeExitRule an abstract class and move common logic for all 
> rules into this.
>  # Update tests for chill mode accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1217) Refactor ChillMode rules and chillmode manager

2019-03-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1217?focusedWorklogId=214643&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214643
 ]

ASF GitHub Bot logged work on HDDS-1217:


Author: ASF GitHub Bot
Created on: 18/Mar/19 09:09
Start Date: 18/Mar/19 09:09
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #558: 
HDDS-1217. Refactor ChillMode rules and chillmode manager.
URL: https://github.com/apache/hadoop/pull/558#discussion_r266345988
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ChillModeExitRule.java
 ##
 @@ -17,16 +17,87 @@
  */
 package org.apache.hadoop.hdds.scm.chillmode;
 
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+
 /**
- * Interface for defining chill mode exit rules.
+ * Abstract class for ChillModeExitRules. When a new rule is added, the new
+ * rule should extend this abstract class.
+ *
+ * Each rule Should do:
+ * 1. Should add a handler for the event it is looking for during the
+ * initialization of the rule.
+ * 2. Add the rule in ScmChillModeManager to list of the rules.
+ *
  *
  * @param <T>
  */
-public interface ChillModeExitRule<T> {
+public abstract class ChillModeExitRule<T> implements EventHandler<T> {
+
+  private final SCMChillModeManager chillModeManager;
+  private final String ruleName;
+
+  public ChillModeExitRule(SCMChillModeManager chillModeManager,
+  String ruleName) {
+this.chillModeManager = chillModeManager;
+this.ruleName = ruleName;
+  }
+
+  /**
+   * Return's the name of this ChillModeExit Rule.
+   * @return ruleName
+   */
+  public String getRuleName() {
+return ruleName;
+  }
+
+
+  /**
+   * Validate's this rule. If this rule condition is met, returns true, else
+   * returns false.
+   * @return boolean
+   */
+  public abstract boolean validate();
 
 Review comment:
   The method can be made protected, and all the implementations can also be 
protected.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214643)
Time Spent: 2h 10m  (was: 2h)

> Refactor ChillMode rules and chillmode manager
> --
>
> Key: HDDS-1217
> URL: https://issues.apache.org/jira/browse/HDDS-1217
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> # Make the ChillModeExitRule an abstract class and move common logic for all 
> rules into this.
>  # Update tests for chill mode accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1217) Refactor ChillMode rules and chillmode manager

2019-03-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1217?focusedWorklogId=214653&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214653
 ]

ASF GitHub Bot logged work on HDDS-1217:


Author: ASF GitHub Bot
Created on: 18/Mar/19 09:10
Start Date: 18/Mar/19 09:10
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #558: 
HDDS-1217. Refactor ChillMode rules and chillmode manager.
URL: https://github.com/apache/hadoop/pull/558#discussion_r266346477
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ChillModeExitRule.java
 ##
 @@ -17,16 +17,87 @@
  */
 package org.apache.hadoop.hdds.scm.chillmode;
 
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+
 /**
- * Interface for defining chill mode exit rules.
+ * Abstract class for ChillModeExitRules. When a new rule is added, the new
+ * rule should extend this abstract class.
+ *
+ * Each rule Should do:
+ * 1. Should add a handler for the event it is looking for during the
+ * initialization of the rule.
+ * 2. Add the rule in ScmChillModeManager to list of the rules.
+ *
  *
  * @param <T>
  */
-public interface ChillModeExitRule<T> {
+public abstract class ChillModeExitRule<T> implements EventHandler<T> {
+
+  private final SCMChillModeManager chillModeManager;
+  private final String ruleName;
+
+  public ChillModeExitRule(SCMChillModeManager chillModeManager,
+  String ruleName) {
+this.chillModeManager = chillModeManager;
+this.ruleName = ruleName;
+  }
+
+  /**
+   * Return's the name of this ChillModeExit Rule.
+   * @return ruleName
+   */
+  public String getRuleName() {
+return ruleName;
+  }
+
+
+  /**
+   * Validate's this rule. If this rule condition is met, returns true, else
+   * returns false.
+   * @return boolean
+   */
+  public abstract boolean validate();
+
+  /**
+   * Actual processing logic for this rule.
+   * @param report
+   */
+  public abstract void process(T report);
+
+  /**
+   * Cleanup action's need to be done, once this rule is satisfied.
+   */
+  public abstract void cleanup();
+
+  @Override
+  public final void onMessage(T report, EventPublisher publisher) {
+
+// TODO: when we have remove handlers, we can remove getInChillmode check
+
+if (chillModeManager.getInChillMode()) {
+  if (validate()) {
+cleanup();
+chillModeManager.validateChillModeExitRules(ruleName, publisher);
+return;
+  }
+
+  process(report);
 
-  boolean validate();
+  if (validate()) {
+cleanup();
+chillModeManager.validateChillModeExitRules(ruleName, publisher);
+  }
+}
+  }
 
-  void process(T report);
+  /**
+   * Return SCMChillModeManager.
+   * @return SCMChillModeManager
+   */
+  public SCMChillModeManager getChillModeManager() {
+return chillModeManager;
+  }
 
 Review comment:
   We can replace this with 
   `protected boolean scmInChillMode() {
return chillModeManager.getInChillMode();
   }`
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214653)
Time Spent: 2h 50m  (was: 2h 40m)

> Refactor ChillMode rules and chillmode manager
> --
>
> Key: HDDS-1217
> URL: https://issues.apache.org/jira/browse/HDDS-1217
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> # Make the ChillModeExitRule an abstract class and move common logic for all 
> rules into this.
>  # Update tests for chill mode accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1217) Refactor ChillMode rules and chillmode manager

2019-03-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1217?focusedWorklogId=214652&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214652
 ]

ASF GitHub Bot logged work on HDDS-1217:


Author: ASF GitHub Bot
Created on: 18/Mar/19 09:10
Start Date: 18/Mar/19 09:10
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #558: 
HDDS-1217. Refactor ChillMode rules and chillmode manager.
URL: https://github.com/apache/hadoop/pull/558#discussion_r266346477
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ChillModeExitRule.java
 ##
 @@ -17,16 +17,87 @@
  */
 package org.apache.hadoop.hdds.scm.chillmode;
 
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+
 /**
- * Interface for defining chill mode exit rules.
+ * Abstract class for ChillModeExitRules. When a new rule is added, the new
+ * rule should extend this abstract class.
+ *
+ * Each rule Should do:
+ * 1. Should add a handler for the event it is looking for during the
+ * initialization of the rule.
+ * 2. Add the rule in ScmChillModeManager to list of the rules.
+ *
  *
  * @param <T>
  */
-public interface ChillModeExitRule<T> {
+public abstract class ChillModeExitRule<T> implements EventHandler<T> {
+
+  private final SCMChillModeManager chillModeManager;
+  private final String ruleName;
+
+  public ChillModeExitRule(SCMChillModeManager chillModeManager,
+  String ruleName) {
+this.chillModeManager = chillModeManager;
+this.ruleName = ruleName;
+  }
+
+  /**
+   * Return's the name of this ChillModeExit Rule.
+   * @return ruleName
+   */
+  public String getRuleName() {
+return ruleName;
+  }
+
+
+  /**
+   * Validate's this rule. If this rule condition is met, returns true, else
+   * returns false.
+   * @return boolean
+   */
+  public abstract boolean validate();
+
+  /**
+   * Actual processing logic for this rule.
+   * @param report
+   */
+  public abstract void process(T report);
+
+  /**
+   * Cleanup action's need to be done, once this rule is satisfied.
+   */
+  public abstract void cleanup();
+
+  @Override
+  public final void onMessage(T report, EventPublisher publisher) {
+
+// TODO: when we have remove handlers, we can remove getInChillmode check
+
+if (chillModeManager.getInChillMode()) {
+  if (validate()) {
+cleanup();
+chillModeManager.validateChillModeExitRules(ruleName, publisher);
+return;
+  }
+
+  process(report);
 
-  boolean validate();
+  if (validate()) {
+cleanup();
+chillModeManager.validateChillModeExitRules(ruleName, publisher);
+  }
+}
+  }
 
-  void process(T report);
+  /**
+   * Return SCMChillModeManager.
+   * @return SCMChillModeManager
+   */
+  public SCMChillModeManager getChillModeManager() {
+return chillModeManager;
+  }
 
 Review comment:
   We can replace this with 
   `protected boolean scmInChillMode() {
return chillModeManager.getInChillMode();
   }`
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214652)
Time Spent: 2h 40m  (was: 2.5h)

> Refactor ChillMode rules and chillmode manager
> --
>
> Key: HDDS-1217
> URL: https://issues.apache.org/jira/browse/HDDS-1217
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> # Make the ChillModeExitRule an abstract class and move common logic for all 
> rules into this.
>  # Update tests for chill mode accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1217) Refactor ChillMode rules and chillmode manager

2019-03-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1217?focusedWorklogId=214654&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214654
 ]

ASF GitHub Bot logged work on HDDS-1217:


Author: ASF GitHub Bot
Created on: 18/Mar/19 09:14
Start Date: 18/Mar/19 09:14
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #558: 
HDDS-1217. Refactor ChillMode rules and chillmode manager.
URL: https://github.com/apache/hadoop/pull/558#discussion_r266346477
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ChillModeExitRule.java
 ##
 @@ -17,16 +17,87 @@
  */
 package org.apache.hadoop.hdds.scm.chillmode;
 
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+
 /**
- * Interface for defining chill mode exit rules.
+ * Abstract class for ChillModeExitRules. When a new rule is added, the new
+ * rule should extend this abstract class.
+ *
+ * Each rule Should do:
+ * 1. Should add a handler for the event it is looking for during the
+ * initialization of the rule.
+ * 2. Add the rule in ScmChillModeManager to list of the rules.
+ *
  *
  * @param <T>
  */
-public interface ChillModeExitRule<T> {
+public abstract class ChillModeExitRule<T> implements EventHandler<T> {
+
+  private final SCMChillModeManager chillModeManager;
+  private final String ruleName;
+
+  public ChillModeExitRule(SCMChillModeManager chillModeManager,
+  String ruleName) {
+this.chillModeManager = chillModeManager;
+this.ruleName = ruleName;
+  }
+
+  /**
+   * Return's the name of this ChillModeExit Rule.
+   * @return ruleName
+   */
+  public String getRuleName() {
+return ruleName;
+  }
+
+
+  /**
+   * Validate's this rule. If this rule condition is met, returns true, else
+   * returns false.
+   * @return boolean
+   */
+  public abstract boolean validate();
+
+  /**
+   * Actual processing logic for this rule.
+   * @param report
+   */
+  public abstract void process(T report);
+
+  /**
+   * Cleanup action's need to be done, once this rule is satisfied.
+   */
+  public abstract void cleanup();
+
+  @Override
+  public final void onMessage(T report, EventPublisher publisher) {
+
+// TODO: when we have remove handlers, we can remove getInChillmode check
+
+if (chillModeManager.getInChillMode()) {
+  if (validate()) {
+cleanup();
+chillModeManager.validateChillModeExitRules(ruleName, publisher);
+return;
+  }
+
+  process(report);
 
-  boolean validate();
+  if (validate()) {
+cleanup();
+chillModeManager.validateChillModeExitRules(ruleName, publisher);
+  }
+}
+  }
 
-  void process(T report);
+  /**
+   * Return SCMChillModeManager.
+   * @return SCMChillModeManager
+   */
+  public SCMChillModeManager getChillModeManager() {
+return chillModeManager;
+  }
 
 Review comment:
   We can replace this with 
   ```protected boolean scmInChillMode() {
return chillModeManager.getInChillMode();
   }```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214654)
Time Spent: 3h  (was: 2h 50m)

> Refactor ChillMode rules and chillmode manager
> --
>
> Key: HDDS-1217
> URL: https://issues.apache.org/jira/browse/HDDS-1217
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> # Make the chillmodeExitRule abstract class and move common logic for all 
> rules into this.
>  # Update test's for chill mode accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-699) Detect Ozone Network topology

2019-03-18 Thread Sammi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794857#comment-16794857
 ] 

Sammi Chen commented on HDDS-699:
-

Yes, the test failures are not relative. I will commit the patch to trunk soon.

Thanks [~szetszwo], [~xyao], [~junjie] and [~linyiqun] for all your time on 
reviewing the patch.

 

> Detect Ozone Network topology
> -
>
> Key: HDDS-699
> URL: https://issues.apache.org/jira/browse/HDDS-699
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Attachments: HDDS-699.00.patch, HDDS-699.01.patch, HDDS-699.02.patch, 
> HDDS-699.03.patch, HDDS-699.04.patch, HDDS-699.05.patch, HDDS-699.06.patch, 
> HDDS-699.07.patch, HDDS-699.08.patch, HDDS-699.09.patch
>
>
> Traditionally this has been implemented in Hadoop via script or customizable 
> java class. One thing we want to add here is the flexible multi-level support 
> instead of fixed levels like DC/Rack/NG/Node.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1217) Refactor ChillMode rules and chillmode manager

2019-03-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1217?focusedWorklogId=214655&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214655
 ]

ASF GitHub Bot logged work on HDDS-1217:


Author: ASF GitHub Bot
Created on: 18/Mar/19 09:14
Start Date: 18/Mar/19 09:14
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #558: 
HDDS-1217. Refactor ChillMode rules and chillmode manager.
URL: https://github.com/apache/hadoop/pull/558#discussion_r266346477
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ChillModeExitRule.java
 ##
 @@ -17,16 +17,87 @@
  */
 package org.apache.hadoop.hdds.scm.chillmode;
 
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+
 /**
- * Interface for defining chill mode exit rules.
+ * Abstract class for ChillModeExitRules. When a new rule is added, the new
+ * rule should extend this abstract class.
+ *
+ * Each rule Should do:
+ * 1. Should add a handler for the event it is looking for during the
+ * initialization of the rule.
+ * 2. Add the rule in ScmChillModeManager to list of the rules.
+ *
  *
  * @param 
  */
-public interface ChillModeExitRule {
+public abstract class ChillModeExitRule implements EventHandler {
+
+  private final SCMChillModeManager chillModeManager;
+  private final String ruleName;
+
+  public ChillModeExitRule(SCMChillModeManager chillModeManager,
+  String ruleName) {
+this.chillModeManager = chillModeManager;
+this.ruleName = ruleName;
+  }
+
+  /**
+   * Return's the name of this ChillModeExit Rule.
+   * @return ruleName
+   */
+  public String getRuleName() {
+return ruleName;
+  }
+
+
+  /**
+   * Validate's this rule. If this rule condition is met, returns true, else
+   * returns false.
+   * @return boolean
+   */
+  public abstract boolean validate();
+
+  /**
+   * Actual processing logic for this rule.
+   * @param report
+   */
+  public abstract void process(T report);
+
+  /**
+   * Cleanup action's need to be done, once this rule is satisfied.
+   */
+  public abstract void cleanup();
+
+  @Override
+  public final void onMessage(T report, EventPublisher publisher) {
+
+// TODO: when we have remove handlers, we can remove getInChillmode check
+
+if (chillModeManager.getInChillMode()) {
+  if (validate()) {
+cleanup();
+chillModeManager.validateChillModeExitRules(ruleName, publisher);
+return;
+  }
+
+  process(report);
 
-  boolean validate();
+  if (validate()) {
+cleanup();
+chillModeManager.validateChillModeExitRules(ruleName, publisher);
+  }
+}
+  }
 
-  void process(T report);
+  /**
+   * Return SCMChillModeManager.
+   * @return SCMChillModeManager
+   */
+  public SCMChillModeManager getChillModeManager() {
+return chillModeManager;
+  }
 
 Review comment:
   We can replace this with 
   ```
   protected boolean scmInChillMode() {
return chillModeManager.getInChillMode();
   }
   ```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214655)
Time Spent: 3h 10m  (was: 3h)

> Refactor ChillMode rules and chillmode manager
> --
>
> Key: HDDS-1217
> URL: https://issues.apache.org/jira/browse/HDDS-1217
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> # Make the chillmodeExitRule abstract class and move common logic for all 
> rules into this.
>  # Update test's for chill mode accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1263) SCM CLI does not list container with id 1

2019-03-18 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794865#comment-16794865
 ] 

Nanda kumar commented on HDDS-1263:
---

According to {{listContainer}} documentation
{noformat}
/**
   * Returns containers under certain conditions.
   * Search container IDs from start ID(exclusive),
   * The max size of the searching range cannot exceed the
   * value of count.
   *
   * @param startContainerID start containerID, >=0,
   * start searching at the head if 0.
   * @param count count must be >= 0
   *  Usually the count will be replace with a very big
   *  value instead of being unlimited in case the db is very big.
   *
   * @return a list of container.
   * @throws IOException
{noformat}
Start ID should be excluded from the result, this patch breaks that behavior.

A better way to fix this is to change {{SCMClientProtocolServer}} to handle '0'
{code:java}
public List listContainer(long startContainerID, int count) 
throws IOException {
..
  final ContainerID containerId = startContainerID != 0 ? 
ContainerID.valueof(startContainerID) : null;
  return scm.getContainerManager().listContainer(containerId, count);
..
}
{code}

> SCM CLI does not list container with id 1
> -
>
> Key: HDDS-1263
> URL: https://issues.apache.org/jira/browse/HDDS-1263
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0, 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Steps to reproduce
>  # Create two containers 
> {code:java}
> ozone scmcli create
> ozone scmcli create{code}
>  # Try to list containers
> {code:java}
> hadoop@7a73695402ae:~$ ozone scmcli list --start=0
>  Container ID should be a positive long. 0
> hadoop@7a73695402ae:~$ ozone scmcli list --start=1 
> { 
> "state" : "OPEN",
> "replicationFactor" : "ONE",
> "replicationType" : "STAND_ALONE",
> "usedBytes" : 0,
> "numberOfKeys" : 0,
> "lastUsed" : 274660388,
> "stateEnterTime" : 274646481,
> "owner" : "OZONE",
> "containerID" : 2,
> "deleteTransactionId" : 0,
> "sequenceId" : 0,
> "open" : true 
> }{code}
> There is no way to list the container with containerID 1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1297) Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization

2019-03-18 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-1297:

Attachment: HDDS-1297.002.patch

> Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization
> 
>
> Key: HDDS-1297
> URL: https://issues.apache.org/jira/browse/HDDS-1297
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1297.001.patch, HDDS-1297.002.patch
>
>
> Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization
> {code}
> ava.lang.IllegalArgumentException: 30 is not within min = 500 or max = 
> 10
>   at 
> org.apache.hadoop.hdds.server.ServerUtils.sanitizeUserArgs(ServerUtils.java:66)
>   at 
> org.apache.hadoop.hdds.scm.HddsServerUtil.getStaleNodeInterval(HddsServerUtil.java:256)
>   at 
> org.apache.hadoop.hdds.scm.node.NodeStateManager.(NodeStateManager.java:136)
>   at 
> org.apache.hadoop.hdds.scm.node.SCMNodeManager.(SCMNodeManager.java:105)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.initalizeSystemManagers(StorageContainerManager.java:391)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.(StorageContainerManager.java:286)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.(StorageContainerManager.java:218)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:684)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:628)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createSCM(MiniOzoneClusterImpl.java:458)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:392)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.testBothGetandPutSmallFile(TestOzoneContainer.java:237)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1297) Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization

2019-03-18 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794873#comment-16794873
 ] 

Yiqun Lin commented on HDDS-1297:
-

Attach the patch to fix similar problem in 
{{TestStorageContainerManager#testBlockDeletionTransactions}}.

> Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization
> 
>
> Key: HDDS-1297
> URL: https://issues.apache.org/jira/browse/HDDS-1297
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1297.001.patch, HDDS-1297.002.patch
>
>
> Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization
> {code}
> ava.lang.IllegalArgumentException: 30 is not within min = 500 or max = 
> 10
>   at 
> org.apache.hadoop.hdds.server.ServerUtils.sanitizeUserArgs(ServerUtils.java:66)
>   at 
> org.apache.hadoop.hdds.scm.HddsServerUtil.getStaleNodeInterval(HddsServerUtil.java:256)
>   at 
> org.apache.hadoop.hdds.scm.node.NodeStateManager.(NodeStateManager.java:136)
>   at 
> org.apache.hadoop.hdds.scm.node.SCMNodeManager.(SCMNodeManager.java:105)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.initalizeSystemManagers(StorageContainerManager.java:391)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.(StorageContainerManager.java:286)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.(StorageContainerManager.java:218)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:684)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:628)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createSCM(MiniOzoneClusterImpl.java:458)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:392)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.testBothGetandPutSmallFile(TestOzoneContainer.java:237)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-699) Detect Ozone Network topology

2019-03-18 Thread Sammi Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HDDS-699:

  Resolution: Fixed
Release Note: Support a flexible multi-level network topology 
implementation.
  Status: Resolved  (was: Patch Available)

> Detect Ozone Network topology
> -
>
> Key: HDDS-699
> URL: https://issues.apache.org/jira/browse/HDDS-699
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Attachments: HDDS-699.00.patch, HDDS-699.01.patch, HDDS-699.02.patch, 
> HDDS-699.03.patch, HDDS-699.04.patch, HDDS-699.05.patch, HDDS-699.06.patch, 
> HDDS-699.07.patch, HDDS-699.08.patch, HDDS-699.09.patch
>
>
> Traditionally this has been implemented in Hadoop via script or customizable 
> java class. One thing we want to add here is the flexible multi-level support 
> instead of fixed levels like DC/Rack/NG/Node.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-699) Detect Ozone Network topology

2019-03-18 Thread Sammi Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HDDS-699:

Release Note: Support a flexible multi-level network topology 
implementation  (was: Support a flexible multi-level network topology 
implementation.)

> Detect Ozone Network topology
> -
>
> Key: HDDS-699
> URL: https://issues.apache.org/jira/browse/HDDS-699
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-699.00.patch, HDDS-699.01.patch, HDDS-699.02.patch, 
> HDDS-699.03.patch, HDDS-699.04.patch, HDDS-699.05.patch, HDDS-699.06.patch, 
> HDDS-699.07.patch, HDDS-699.08.patch, HDDS-699.09.patch
>
>
> Traditionally this has been implemented in Hadoop via script or customizable 
> java class. One thing we want to add here is the flexible multi-level support 
> instead of fixed levels like DC/Rack/NG/Node.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-699) Detect Ozone Network topology

2019-03-18 Thread Sammi Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HDDS-699:

Fix Version/s: 0.5.0

> Detect Ozone Network topology
> -
>
> Key: HDDS-699
> URL: https://issues.apache.org/jira/browse/HDDS-699
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-699.00.patch, HDDS-699.01.patch, HDDS-699.02.patch, 
> HDDS-699.03.patch, HDDS-699.04.patch, HDDS-699.05.patch, HDDS-699.06.patch, 
> HDDS-699.07.patch, HDDS-699.08.patch, HDDS-699.09.patch
>
>
> Traditionally this has been implemented in Hadoop via script or customizable 
> java class. One thing we want to add here is the flexible multi-level support 
> instead of fixed levels like DC/Rack/NG/Node.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13762) Support non-volatile storage class memory(SCM) in HDFS cache directives

2019-03-18 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794885#comment-16794885
 ] 

Feilong He commented on HDFS-13762:
---

For the 4th subtask HDFS-14357, an initial patch has been uploaded. This patch 
just updates the relevant docs. Thanks!

> Support non-volatile storage class memory(SCM) in HDFS cache directives
> ---
>
> Key: HDFS-13762
> URL: https://issues.apache.org/jira/browse/HDFS-13762
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching, datanode
>Reporter: Sammi Chen
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-13762.000.patch, HDFS-13762.001.patch, 
> HDFS-13762.002.patch, HDFS-13762.003.patch, HDFS-13762.004.patch, 
> HDFS-13762.005.patch, HDFS-13762.006.patch, HDFS-13762.007.patch, 
> HDFS-13762.008.patch, SCMCacheDesign-2018-11-08.pdf, SCMCacheTestPlan.pdf
>
>
> No-volatile storage class memory is a type of memory that can keep the data 
> content after power failure or between the power cycle. Non-volatile storage 
> class memory device usually has near access speed as memory DIMM while has 
> lower cost than memory.  So today It is usually used as a supplement to 
> memory to hold long tern persistent data, such as data in cache. 
> Currently in HDFS, we have OS page cache backed read only cache and RAMDISK 
> based lazy write cache.  Non-volatile memory suits for both these functions. 
> This Jira aims to enable storage class memory first in read cache. Although 
> storage class memory has non-volatile characteristics, to keep the same 
> behavior as current read only cache, we don't use its persistent 
> characteristics currently.  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-699) Detect Ozone Network topology

2019-03-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794886#comment-16794886
 ] 

Hudson commented on HDDS-699:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16227 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16227/])
HDDS-699. Detect Ozone Network topology. Contributed by Sammi Chen. (sammichen: 
rev 4d2a116d6ef865c29d0df0a743e91874942af412)
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/net/NetworkTopology.java
* (add) 
hadoop-hdds/common/src/test/resources/networkTopologyTestFiles/no-leaf.xml
* (add) 
hadoop-hdds/common/src/test/resources/networkTopologyTestFiles/invalid-version.xml
* (add) hadoop-hdds/common/src/main/resources/network-topology-default.xml
* (edit) hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/net/NetworkTopologyImpl.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/net/NetConstants.java
* (add) 
hadoop-hdds/common/src/test/resources/networkTopologyTestFiles/enforce-error.xml
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/net/NodeSchema.java
* (add) 
hadoop-hdds/common/src/test/resources/networkTopologyTestFiles/no-root.xml
* (add) 
hadoop-hdds/common/src/test/resources/networkTopologyTestFiles/path-layers-size-mismatch.xml
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/net/InnerNodeImpl.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/net/Node.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/net/package-info.java
* (add) 
hadoop-hdds/common/src/test/resources/networkTopologyTestFiles/multiple-topology.xml
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/net/NodeSchemaLoader.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
* (add) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/scm/net/TestNodeSchemaLoader.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/net/NetUtils.java
* (add) 
hadoop-hdds/common/src/test/resources/networkTopologyTestFiles/path-with-id-reference-failure.xml
* (add) 
hadoop-hdds/common/src/test/resources/networkTopologyTestFiles/wrong-path-order-1.xml
* (add) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/scm/net/TestNodeSchemaManager.java
* (add) 
hadoop-hdds/common/src/test/resources/networkTopologyTestFiles/multiple-root.xml
* (add) 
hadoop-hdds/common/src/test/resources/networkTopologyTestFiles/unknown-layer-type.xml
* (add) 
hadoop-hdds/common/src/test/resources/networkTopologyTestFiles/invalid-cost.xml
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/net/NodeImpl.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/net/InnerNode.java
* (add) hadoop-hdds/common/src/test/resources/networkTopologyTestFiles/good.xml
* (add) 
hadoop-hdds/common/src/test/resources/networkTopologyTestFiles/multiple-leaf.xml
* (add) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/scm/net/TestNetworkTopologyImpl.java
* (add) 
hadoop-hdds/common/src/test/resources/networkTopologyTestFiles/no-topology.xml
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/net/NodeSchemaManager.java
* (add) hadoop-hdds/common/src/main/resources/network-topology-nodegroup.xml
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (add) 
hadoop-hdds/common/src/test/resources/networkTopologyTestFiles/wrong-path-order-2.xml


> Detect Ozone Network topology
> -
>
> Key: HDDS-699
> URL: https://issues.apache.org/jira/browse/HDDS-699
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-699.00.patch, HDDS-699.01.patch, HDDS-699.02.patch, 
> HDDS-699.03.patch, HDDS-699.04.patch, HDDS-699.05.patch, HDDS-699.06.patch, 
> HDDS-699.07.patch, HDDS-699.08.patch, HDDS-699.09.patch
>
>
> Traditionally this has been implemented in Hadoop via script or customizable 
> java class. One thing we want to add here is the flexible multi-level support 
> instead of fixed levels like DC/Rack/NG/Node.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-699) Detect Ozone Network topology

2019-03-18 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794909#comment-16794909
 ] 

Tsz Wo Nicholas Sze commented on HDDS-699:
--

Thank you, [~Sammi].

> Detect Ozone Network topology
> -
>
> Key: HDDS-699
> URL: https://issues.apache.org/jira/browse/HDDS-699
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-699.00.patch, HDDS-699.01.patch, HDDS-699.02.patch, 
> HDDS-699.03.patch, HDDS-699.04.patch, HDDS-699.05.patch, HDDS-699.06.patch, 
> HDDS-699.07.patch, HDDS-699.08.patch, HDDS-699.09.patch
>
>
> Traditionally this has been implemented in Hadoop via script or customizable 
> java class. One thing we want to add here is the flexible multi-level support 
> instead of fixed levels like DC/Rack/NG/Node.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1297) Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization

2019-03-18 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794911#comment-16794911
 ] 

Mukul Kumar Singh commented on HDDS-1297:
-

Thanks for the updated patch [~linyiqun]. There are lots of other tests which 
are still failing with the same exception. (TestSCMNodeManager, 
TestOzoneRestWithMiniCluster, TestOzoneClient).

Should we try increasing the maxFactor in sanitizeUserArgs, which is being 
called from getStaleNodeInterval. ? 


> Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization
> 
>
> Key: HDDS-1297
> URL: https://issues.apache.org/jira/browse/HDDS-1297
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1297.001.patch, HDDS-1297.002.patch
>
>
> Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization
> {code}
> ava.lang.IllegalArgumentException: 30 is not within min = 500 or max = 
> 10
>   at 
> org.apache.hadoop.hdds.server.ServerUtils.sanitizeUserArgs(ServerUtils.java:66)
>   at 
> org.apache.hadoop.hdds.scm.HddsServerUtil.getStaleNodeInterval(HddsServerUtil.java:256)
>   at 
> org.apache.hadoop.hdds.scm.node.NodeStateManager.(NodeStateManager.java:136)
>   at 
> org.apache.hadoop.hdds.scm.node.SCMNodeManager.(SCMNodeManager.java:105)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.initalizeSystemManagers(StorageContainerManager.java:391)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.(StorageContainerManager.java:286)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.(StorageContainerManager.java:218)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:684)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:628)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createSCM(MiniOzoneClusterImpl.java:458)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:392)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.testBothGetandPutSmallFile(TestOzoneContainer.java:237)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-18 Thread Rakesh R (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794924#comment-16794924
 ] 

Rakesh R commented on HDFS-14355:
-

{quote}This property specifies the cache capactiy for both memory & pmem. We 
kept same behavior upon the specified cache capacity for pmem cache as that for 
memory cache.
{quote}
Please look at my above comment#8. As we know the existing code deals with only 
the OS page cache, but now adding pmem as well and requires special 
intelligence to manage the stats/overflows if we allow to plug in two entities 
together. Just a quick thought is, to add new configuration 
{{dfs.datanode.cache.pmem.capacity}} and reserve/release logic can be moved to 
specific MappableBlockLoader's.

> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch
>
>
> This task is to implement the caching to persistent memory using pure 
> {{java.nio.MappedByteBuffer}}, which could be useful in case native support 
> isn't available or convenient in some environments or platforms.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14375) DataNode cannot serve BlockPool to multiple NameNodes in the different realm

2019-03-18 Thread Jihyun Cho (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jihyun Cho updated HDFS-14375:
--
Description: 
Let me explain the environment for a description.

{noformat}
KDC(TEST1.COM) <-- Cross-realm trust -->  KDC(TEST2.COM)
   | |
NameNode1 NameNode2
   | |
   -- DataNodes (federated) --
{noformat}

We configured the secure clusters and federated them.
* Principal
** NameNode1 : nn/_h...@test1.com 
** NameNode2 : nn/_h...@test2.com 
** DataNodes : dn/_h...@test2.com 

But DataNodes could not connect to NameNode1 with below error.

{noformat}
WARN 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization failed for dn/hadoop-datanode.test@test2.com (auth:KERBEROS) 
for protocol=interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: 
this service is only accessible by dn/hadoop-datanode.test@test1.com
{noformat}

We have avoided the error with attached patch.
The patch checks only using {{username}} and {{hostname}} except {{realm}}.
I think there is no problem. Because if realms are different and no cross-realm 
setting, they cannot communication each other. If you are worried about this, 
please let me know.

In the long run, it would be better if I could set multiple realms for 
authorize. Like this;

{noformat}

  dfs.namenode.kerberos.trust-realms
  TEST1.COM,TEST2.COM

{noformat}


  was:
Let me explain the environment for a description.

{noformat}
KDC(TEST1.COM) <-- Cross-realm trust -->  KDC(TEST2.COM)
   | |
NameNode1 NameNode2
   | |
   -- DataNodes (federated) --
{noformat}

We configured the secure clusters and federated them.
But DataNodes could not connect to NameNode1 with below error.

{noformat}
WARN 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization failed for dn/hadoop-datanode.test@test2.com (auth:KERBEROS) 
for protocol=interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: 
this service is only accessible by dn/hadoop-datanode.test@test1.com
{noformat}

We have avoided the error with attached patch.
The patch checks only using {{username}} and {{hostname}} except {{realm}}.
I think there is no problem. Because if realms are different and no cross-realm 
setting, they cannot communication each other. If you are worried about this, 
please let me know.

In the long run, it would be better if I could set multiple realms for 
authorize. Like this;

{noformat}

  dfs.namenode.kerberos.trust-realms
  TEST1.COM,TEST2.COM

{noformat}



> DataNode cannot serve BlockPool to multiple NameNodes in the different realm
> 
>
> Key: HDFS-14375
> URL: https://issues.apache.org/jira/browse/HDFS-14375
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.1
>Reporter: Jihyun Cho
>Priority: Major
> Attachments: authorize.patch
>
>
> Let me explain the environment for a description.
> {noformat}
> KDC(TEST1.COM) <-- Cross-realm trust -->  KDC(TEST2.COM)
>| |
> NameNode1 NameNode2
>| |
>-- DataNodes (federated) --
> {noformat}
> We configured the secure clusters and federated them.
> * Principal
> ** NameNode1 : nn/_h...@test1.com 
> ** NameNode2 : nn/_h...@test2.com 
> ** DataNodes : dn/_h...@test2.com 
> But DataNodes could not connect to NameNode1 with below error.
> {noformat}
> WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for dn/hadoop-datanode.test@test2.com 
> (auth:KERBEROS) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/hadoop-datanode.test@test1.com
> {noformat}
> We have avoided the error with attached patch.
> The patch checks only using {{username}} and {{hostname}} except {{realm}}.
> I think there is no problem. Because if realms are different and no 
> cross-realm setting, they cannot communication each other. If you are worried 
> about this, please let me know.
> In the long run, it would be better if I could set multiple realms for 
> authorize. Like this;
> {noformat}
> 
>   dfs.namenode.kerberos.trust-realms
>   TEST1.COM,TEST2.COM
> 
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr

[jira] [Commented] (HDDS-1297) Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794930#comment-16794930
 ] 

Hadoop QA commented on HDDS-1297:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 50s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 31s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.ozone.client.rpc.TestBCSID |
|   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
|   | hadoop.ozone.om.TestOzoneManager |
|   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
|   | hadoop.ozone.om.TestScmChillMode |
|   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
|   | hadoop.ozone.TestStorageContainerManager |
|   | hadoop.ozone.scm.node.TestQueryNode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2541/artifact/out/Dockerfile 
|
| J

[jira] [Created] (HDFS-14375) DataNode cannot serve BlockPool to multiple NameNodes in the different realm

2019-03-18 Thread Jihyun Cho (JIRA)
Jihyun Cho created HDFS-14375:
-

 Summary: DataNode cannot serve BlockPool to multiple NameNodes in 
the different realm
 Key: HDFS-14375
 URL: https://issues.apache.org/jira/browse/HDFS-14375
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 3.1.1
Reporter: Jihyun Cho
 Attachments: authorize.patch

Let me explain the environment for a description.

{noformat}
KDC(TEST1.COM) <-- Cross-realm trust -->  KDC(TEST2.COM)
   | |
NameNode1 NameNode2
   | |
   -- DataNodes (federated) --
{noformat}

We configured the secure clusters and federated them.
But DataNodes could not connect to NameNode1 with below error.

{noformat}
WARN 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization failed for dn/hadoop-datanode.test@test2.com (auth:KERBEROS) 
for protocol=interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: 
this service is only accessible by dn/hadoop-datanode.test@test1.com
{noformat}

We have avoided the error with attached patch.
The patch checks only using {{username}} and {{hostname}} except {{realm}}.
I think there is no problem. Because if realms are different and no cross-realm 
setting, they cannot communication each other. If you are worried about this, 
please let me know.

In the long run, it would be better if I could set multiple realms for 
authorize. Like this;

{noformat}

  dfs.namenode.kerberos.trust-realms
  TEST1.COM,TEST2.COM

{noformat}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-18 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794975#comment-16794975
 ] 

Feilong He commented on HDFS-14355:
---

Thanks [~rakeshr] for your valuable comment. I got your point about the 
configuration for cache capacity of pmem. As synced with you, actually it is 
not reasonable to make pmem share this configuration with memory in the current 
implementation, since DataNode will also use this configuration to control 
memory usage for Lazy Persist Writes. I will update the patch for fixing this 
potential critical issue and other issues put forward in your other comments. 
Thanks so much!

> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch
>
>
> This task is to implement the caching to persistent memory using pure 
> {{java.nio.MappedByteBuffer}}, which could be useful in case native support 
> isn't available or convenient in some environments or platforms.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14369) RBF: Fix trailing "/" for webhdfs

2019-03-18 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794977#comment-16794977
 ] 

Ayush Saxena commented on HDFS-14369:
-

Thanx [~ajisakaa] and [~crh] for the analysis.

Seems like the mount table resolver doesn't tend to remove the leading slash 
while getting the mount entry and believes in matching the entry name exactly.

Either we can handle the leading slash at the \{{MountTableResolver.java}} or 
even at getListing() at L681.

I tried with this at \{{MountTableResolver.java}} and the test passed for me.

 
{code:java}
 public List getMountPoints(String path) throws IOException {
    verifyMountTable();
    path = path.replaceAll(".+/$", path.substring(0, path.length() - 1));
{code}
 

 

> RBF: Fix trailing "/" for webhdfs
> -
>
> Key: HDFS-14369
> URL: https://issues.apache.org/jira/browse/HDFS-14369
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14369-HDFS-13891-regressiontest-001.patch
>
>
> WebHDFS doesn't trim trailing slash causing discrepancy in operations.
> Example below
> --
> Using HDFS API, two directory are listed.
> {code}
> $ hdfs dfs -ls hdfs://:/tmp/
> Found 2 items
> drwxrwxrwx   - hdfs supergroup  0 2018-11-09 17:50 
> hdfs://:/tmp/tmp1
> drwxrwxrwx   - hdfs supergroup  0 2018-11-09 17:50 
> hdfs://:/tmp/tmp2
> {code}
> Using WebHDFS API, only one directory is listed.
> {code}
> $ curl -u : --negotiate -i 
> "http://:50071/webhdfs/v1/tmp/?op=LISTSTATUS"
> (snip)
> {"FileStatuses":{"FileStatus":[
> {"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16387,"group":"supergroup","length":0,"modificationTime":1552016766769,"owner":"hdfs","pathSuffix":"tmp1","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
> ]}}
> {code}
> The mount table is as follows:
> {code}
> $ hdfs dfsrouteradmin -ls /tmp
> Mount Table Entries:
> SourceDestinations  Owner 
> Group Mode  Quota/Usage  
> /tmp  ns1->/tmp aajisaka  
> users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> /tmp/tmp1 ns1->/tmp/tmp1aajisaka  
> users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> /tmp/tmp2 ns2->/tmp/tmp2aajisaka  
> users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> {code}
> Without trailing thrash, two directories are listed.
> {code}
> $ curl -u : --negotiate -i 
> "http://:50071/webhdfs/v1/tmp?op=LISTSTATUS"
> (snip)
> {"FileStatuses":{"FileStatus":[
> {"accessTime":1541753421917,"blockSize":0,"childrenNum":0,"fileId":0,"group":"supergroup","length":0,"modificationTime":1541753421917,"owner":"hdfs","pathSuffix":"tmp1","permission":"777","replication":0,"storagePolicy":0,"symlink":"","type":"DIRECTORY"},
> {"accessTime":1541753429812,"blockSize":0,"childrenNum":0,"fileId":0,"group":"supergroup","length":0,"modificationTime":1541753429812,"owner":"hdfs","pathSuffix":"tmp2","permission":"777","replication":0,"storagePolicy":0,"symlink":"","type":"DIRECTORY"}
> ]}}
> {code}
> [~ajisakaa] Thanks for reporting this, I borrowed the text from 
> HDFS-13972



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-18 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794975#comment-16794975
 ] 

Feilong He edited comment on HDFS-14355 at 3/18/19 12:40 PM:
-

Thanks [~rakeshr] for your valuable comment. I got your point about the 
configuration for cache capacity of pmem. As synced with you, actually it is 
not reasonable to make pmem share such configuration with memory in the current 
implementation, since DataNode will also use this configuration to control 
memory usage for Lazy Persist Writes. I will update the patch for fixing this 
potential critical issue and other issues put forward in your other comments. 
Thanks so much!


was (Author: philohe):
Thanks [~rakeshr] for your valuable comment. I got your point about the 
configuration for cache capacity of pmem. As synced with you, actually it is 
not reasonable to make pmem share this configuration with memory in the current 
implementation, since DataNode will also use this configuration to control 
memory usage for Lazy Persist Writes. I will update the patch for fixing this 
potential critical issue and other issues put forward in your other comments. 
Thanks so much!

> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch
>
>
> This task is to implement the caching to persistent memory using pure 
> {{java.nio.MappedByteBuffer}}, which could be useful in case native support 
> isn't available or convenient in some environments or platforms.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14357) Update the relevant docs for HDFS cache on SCM support

2019-03-18 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794983#comment-16794983
 ] 

Feilong He commented on HDFS-14357:
---

The property dfs.datanode.max.locked.memory should not be shared by memory and 
pmem for controlling the capacity as mentioned by [~rakeshr] in HDFS-14355. I 
am still fixing this issue and the patch for this subtask will be updated 
accordingly!

> Update the relevant docs for HDFS cache on SCM support
> --
>
> Key: HDFS-14357
> URL: https://issues.apache.org/jira/browse/HDFS-14357
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14357.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1205) Refactor ReplicationManager to handle QUASI_CLOSED containers

2019-03-18 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1205:
--
Summary: Refactor ReplicationManager to handle QUASI_CLOSED containers  
(was: Introduce Replication Manager Thread inside Container Manager)

> Refactor ReplicationManager to handle QUASI_CLOSED containers
> -
>
> Key: HDDS-1205
> URL: https://issues.apache.org/jira/browse/HDDS-1205
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-1205.000.patch, HDDS-1205.001.patch
>
>
> This jira introduces a replication manager thread inside the 
> {{ContainerManager}} which will use RMT (Replication Manager Thread) Decision 
> Engine to decide the action to be taken on flagged containers.
> The containers are flagged for ReplicationManagerThread by 
> ContainerReportProcessor(s) and Stale/Dead Node event handlers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1205) Refactor ReplicationManager to handle QUASI_CLOSED containers

2019-03-18 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1205:
--
Description: 
This Jira is for refactoring the ReplicationManager code to handle all the 
scenarios that are possible with the introduction of QUASI_CLOSED state of a 
container.

The new ReplicationManager will go through the complete set of containers in 
SCM to find out under/over replicated and unhealthy containers and takes 
appropriate action.

  was:
This jira introduces a replication manager thread inside the 
{{ContainerManager}} which will use RMT (Replication Manager Thread) Decision 
Engine to decide the action to be taken on flagged containers.
The containers are flagged for ReplicationManagerThread by 
ContainerReportProcessor(s) and Stale/Dead Node event handlers.


> Refactor ReplicationManager to handle QUASI_CLOSED containers
> -
>
> Key: HDDS-1205
> URL: https://issues.apache.org/jira/browse/HDDS-1205
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-1205.000.patch, HDDS-1205.001.patch
>
>
> This Jira is for refactoring the ReplicationManager code to handle all the 
> scenarios that are possible with the introduction of QUASI_CLOSED state of a 
> container.
> The new ReplicationManager will go through the complete set of containers in 
> SCM to find out under/over replicated and unhealthy containers and takes 
> appropriate action.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1205) Refactor ReplicationManager to handle QUASI_CLOSED containers

2019-03-18 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795050#comment-16795050
 ] 

Nanda kumar commented on HDDS-1205:
---

Based on the offline discussion with [~anu], [~arpitagarwal] and [~msingh] 
changed the Jira summary and description. Will upload a new patch shortly.

> Refactor ReplicationManager to handle QUASI_CLOSED containers
> -
>
> Key: HDDS-1205
> URL: https://issues.apache.org/jira/browse/HDDS-1205
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-1205.000.patch, HDDS-1205.001.patch
>
>
> This Jira is for refactoring the ReplicationManager code to handle all the 
> scenarios that are possible with the introduction of QUASI_CLOSED state of a 
> container.
> The new ReplicationManager will go through the complete set of containers in 
> SCM to find under- or over-replicated and unhealthy containers and take 
> appropriate action.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1300) Optimize non-recursive ozone filesystem apis

2019-03-18 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-1300:
-

 Summary: Optimize non-recursive ozone filesystem apis
 Key: HDDS-1300
 URL: https://issues.apache.org/jira/browse/HDDS-1300
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Filesystem, Ozone Manager
Reporter: Lokesh Jain
Assignee: Lokesh Jain


This Jira aims to optimise non-recursive APIs in the Ozone file system. The 
Jira would add support for such APIs in Ozone Manager in order to reduce the 
number of RPC calls to Ozone Manager.
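
For illustration, a rough sketch of the round-trip reduction being described 
(interface and method names are assumptions, not the actual OM API):
{code:java}
// Sketch only: replace several client-side round trips with one
// server-side call inside Ozone Manager.
public class NonRecursiveMkdirSketch {

  interface OmSketch {
    boolean keyExists(String path);              // one RPC per call
    void createKeyMarker(String path);           // one RPC per call
    void createDirectoryServerSide(String path); // single RPC; OM checks locally
  }

  // Before: the client verifies each ancestor itself, one RPC at a time.
  static void mkdirClientSide(OmSketch om, String[] ancestors, String dir) {
    for (String a : ancestors) {
      if (!om.keyExists(a)) {    // RPC
        om.createKeyMarker(a);   // RPC
      }
    }
    om.createKeyMarker(dir);     // RPC
  }

  // After: a single RPC; the Ozone Manager performs the checks locally.
  static void mkdirServerSide(OmSketch om, String dir) {
    om.createDirectoryServerSide(dir);
  }
}
{code}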



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1301) Optimize recursive ozone filesystem apis

2019-03-18 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-1301:
-

 Summary: Optimize recursive ozone filesystem apis
 Key: HDDS-1301
 URL: https://issues.apache.org/jira/browse/HDDS-1301
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Lokesh Jain
Assignee: Lokesh Jain


This Jira aims to optimise recursive APIs in the Ozone file system. These are 
the APIs with a recursive flag, which requires an operation to be performed on 
all the children of the directory. The Jira would add support for recursive 
APIs in Ozone Manager in order to reduce the number of RPC calls to Ozone 
Manager.
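
As a sketch, the same idea applied to a recursive call such as delete (all 
names are assumptions for illustration):
{code:java}
import java.util.List;

// Sketch only: a recursive delete done with one RPC per child vs. a
// single OM-side call that iterates the children locally.
public class RecursiveDeleteSketch {

  interface OmSketch {
    List<String> listChildren(String dir);      // RPC
    void deleteKey(String key);                 // RPC
    void deleteRecursiveServerSide(String dir); // single RPC
  }

  // Before: O(number of children) RPCs from the client.
  static void deleteClientSide(OmSketch om, String dir) {
    for (String child : om.listChildren(dir)) { // RPC
      om.deleteKey(child);                      // RPC per child
    }
    om.deleteKey(dir);                          // RPC
  }

  // After: one RPC; the OM walks the children itself.
  static void deleteServerSide(OmSketch om, String dir) {
    om.deleteRecursiveServerSide(dir);
  }
}
{code}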



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14328) [Clean-up] Remove NULL check before instanceof in TestGSet

2019-03-18 Thread Daniel Templeton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-14328:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

+1  Thanks for the patch, [~shwetayakkali].  Committed to trunk.

> [Clean-up] Remove NULL check before instanceof in TestGSet
> --
>
> Key: HDFS-14328
> URL: https://issues.apache.org/jira/browse/HDFS-14328
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14328.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1301) Optimize recursive ozone filesystem apis

2019-03-18 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-1301:
--
Description: This Jira aims to optimise recursive APIs in the Ozone file 
system. These are the APIs with a recursive flag, which requires an operation 
to be performed on all the children of the directory. The Jira would add 
support for recursive APIs in Ozone Manager in order to reduce the number of 
RPC calls to Ozone Manager. Also, these operations are currently not atomic; 
this Jira would make all the operations in the Ozone file system atomic.  (was: 
This Jira aims to optimise recursive APIs in the Ozone file system. These are 
the APIs with a recursive flag, which requires an operation to be performed on 
all the children of the directory. The Jira would add support for recursive 
APIs in Ozone Manager in order to reduce the number of RPC calls to Ozone 
Manager.)

> Optimize recursive ozone filesystem apis
> 
>
> Key: HDDS-1301
> URL: https://issues.apache.org/jira/browse/HDDS-1301
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>
> This Jira aims to optimise recursive APIs in the Ozone file system. These 
> are the APIs with a recursive flag, which requires an operation to be 
> performed on all the children of the directory. The Jira would add support 
> for recursive APIs in Ozone Manager in order to reduce the number of RPC 
> calls to Ozone Manager. Also, these operations are currently not atomic; 
> this Jira would make all the operations in the Ozone file system atomic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14328) [Clean-up] Remove NULL check before instanceof in TestGSet

2019-03-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795069#comment-16795069
 ] 

Hudson commented on HDFS-14328:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16231 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16231/])
HDFS-14328. [Clean-up] Remove NULL check before instanceof in TestGSet 
(templedf: rev 2db38abffcd89bf1fa0cad953254daea7e4e415b)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestGSet.java
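
For context, the pattern removed by this clean-up: {{instanceof}} already 
evaluates to false for null, so the explicit null check is redundant. A tiny 
self-contained illustration:
{code:java}
public class InstanceofNullCheckDemo {
  public static void main(String[] args) {
    Object o = null;
    // Before the clean-up: redundant null check.
    boolean before = o != null && o instanceof String;
    // After the clean-up: `null instanceof T` is already false in Java.
    boolean after = o instanceof String;
    System.out.println(before == after); // prints true
  }
}
{code}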


> [Clean-up] Remove NULL check before instanceof in TestGSet
> --
>
> Key: HDFS-14328
> URL: https://issues.apache.org/jira/browse/HDFS-14328
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HDFS-14328.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1300) Optimize non-recursive ozone filesystem apis

2019-03-18 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-1300:
--
Attachment: HDDS-1300.001.patch

> Optimize non-recursive ozone filesystem apis
> 
>
> Key: HDDS-1300
> URL: https://issues.apache.org/jira/browse/HDDS-1300
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Filesystem, Ozone Manager
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-1300.001.patch
>
>
> This Jira aims to optimise non-recursive APIs in the Ozone file system. The 
> Jira would add support for such APIs in Ozone Manager in order to reduce the 
> number of RPC calls to Ozone Manager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1300) Optimize non-recursive ozone filesystem apis

2019-03-18 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795078#comment-16795078
 ] 

Lokesh Jain commented on HDDS-1300:
---

The v1 patch adds support for the createDirectory API. Other APIs will be 
added in a later patch. The patch can be submitted after HDDS-1185.

> Optimize non-recursive ozone filesystem apis
> 
>
> Key: HDDS-1300
> URL: https://issues.apache.org/jira/browse/HDDS-1300
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Filesystem, Ozone Manager
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-1300.001.patch
>
>
> This Jira aims to optimise non-recursive APIs in the Ozone file system. The 
> Jira would add support for such APIs in Ozone Manager in order to reduce the 
> number of RPC calls to Ozone Manager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1205) Refactor ReplicationManager to handle QUASI_CLOSED containers

2019-03-18 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1205:
--
Attachment: HDDS-1205.002.patch

> Refactor ReplicationManager to handle QUASI_CLOSED containers
> -
>
> Key: HDDS-1205
> URL: https://issues.apache.org/jira/browse/HDDS-1205
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-1205.000.patch, HDDS-1205.001.patch, 
> HDDS-1205.002.patch
>
>
> This Jira is for refactoring the ReplicationManager code to handle all the 
> scenarios that are possible with the introduction of QUASI_CLOSED state of a 
> container.
> The new ReplicationManager will go through the complete set of containers in 
> SCM to find under- or over-replicated and unhealthy containers and take 
> appropriate action.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1205) Refactor ReplicationManager to handle QUASI_CLOSED containers

2019-03-18 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795085#comment-16795085
 ] 

Nanda kumar commented on HDDS-1205:
---

Uploaded patch v02 for initial review, will update the patch with more unit 
tests.

> Refactor ReplicationManager to handle QUASI_CLOSED containers
> -
>
> Key: HDDS-1205
> URL: https://issues.apache.org/jira/browse/HDDS-1205
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-1205.000.patch, HDDS-1205.001.patch, 
> HDDS-1205.002.patch
>
>
> This Jira is for refactoring the ReplicationManager code to handle all the 
> scenarios that are possible with the introduction of QUASI_CLOSED state of a 
> container.
> The new ReplicationManager will go through the complete set of containers in 
> SCM to find under- or over-replicated and unhealthy containers and take 
> appropriate action.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14349) Edit log may be rolled more frequently than necessary with multiple Standby nodes

2019-03-18 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795105#comment-16795105
 ] 

Erik Krogen commented on HDFS-14349:


Hi [~starphin], thanks for the input. I agree that the _auto-roll_ operation is 
performed by the ANN. However, SbNNs trigger the original/"normal" edit log 
roll via {{NamenodeProtocol#rollEditLog()}}. This is triggered by the 
{{EditLogTailer}} on the SbNN which has a {{rollEditsRpcExecutor}}.
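
For readers following along, a simplified sketch of that SbNN-side trigger 
path (everything except {{rollEditLog()}} and the executor name is an 
assumption, not the actual EditLogTailer code):
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch only: each Standby's tailer independently asks the Active to
// roll once its roll period elapses, submitting the RPC on a dedicated
// executor. With N Standbys, the Active can thus be asked to roll up to
// N times per period -- the over-rolling described in this issue.
public class EditLogTailerSketch {

  interface NamenodeProtocolStub {
    void rollEditLog(); // RPC to the Active NN
  }

  private final ExecutorService rollEditsRpcExecutor =
      Executors.newSingleThreadExecutor();
  private final NamenodeProtocolStub activeNn;
  private final long rollPeriodMs;
  private long lastRollTimeMs = System.currentTimeMillis();

  EditLogTailerSketch(NamenodeProtocolStub activeNn, long rollPeriodMs) {
    this.activeNn = activeNn;
    this.rollPeriodMs = rollPeriodMs;
  }

  void doTailEdits() {
    if (System.currentTimeMillis() - lastRollTimeMs > rollPeriodMs) {
      rollEditsRpcExecutor.submit(activeNn::rollEditLog);
      lastRollTimeMs = System.currentTimeMillis();
    }
  }
}
{code}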

> Edit log may be rolled more frequently than necessary with multiple Standby 
> nodes
> -
>
> Key: HDFS-14349
> URL: https://issues.apache.org/jira/browse/HDFS-14349
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, hdfs, qjm
>Reporter: Erik Krogen
>Assignee: Ekanth Sethuramalingam
>Priority: Major
>
> When HDFS-14317 was fixed, we tackled the problem that in a cluster with 
> in-progress edit log tailing enabled, a Standby NameNode may _never_ roll the 
> edit logs, which can eventually cause data loss.
> Unfortunately, in the process, it was made so that if there are multiple 
> Standby NameNodes, they will all roll the edit logs at their specified 
> frequency, so the edit log will be rolled X times more frequently than they 
> should be (where X is the number of Standby NNs). This is not as bad as the 
> original bug since rolling frequently does not affect correctness or data 
> availability, but may degrade performance by creating more edit log segments 
> than necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1296) Fix checkstyle issue from Nightly run

2019-03-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1296?focusedWorklogId=214833&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214833
 ]

ASF GitHub Bot logged work on HDDS-1296:


Author: ASF GitHub Bot
Created on: 18/Mar/19 15:38
Start Date: 18/Mar/19 15:38
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on issue #616: HDDS-1296. Fix 
checkstyle issue from Nightly run. Contributed by Xiao…
URL: https://github.com/apache/hadoop/pull/616#issuecomment-473964834
 
 
   +1
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214833)
Time Spent: 50m  (was: 40m)

> Fix checkstyle issue from Nightly run
> -
>
> Key: HDDS-1296
> URL: https://issues.apache.org/jira/browse/HDDS-1296
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.4.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> [https://ci.anzix.net/job/ozone-nightly/32/checkstyle/moduleName.588460772/fileName.-1184872187/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1296) Fix checkstyle issue from Nightly run

2019-03-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1296?focusedWorklogId=214836&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214836
 ]

ASF GitHub Bot logged work on HDDS-1296:


Author: ASF GitHub Bot
Created on: 18/Mar/19 15:38
Start Date: 18/Mar/19 15:38
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #616: HDDS-1296. Fix 
checkstyle issue from Nightly run. Contributed by Xiao…
URL: https://github.com/apache/hadoop/pull/616
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214836)
Time Spent: 1h  (was: 50m)

> Fix checkstyle issue from Nightly run
> -
>
> Key: HDDS-1296
> URL: https://issues.apache.org/jira/browse/HDDS-1296
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.4.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> [https://ci.anzix.net/job/ozone-nightly/32/checkstyle/moduleName.588460772/fileName.-1184872187/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1296) Fix checkstyle issue from Nightly run

2019-03-18 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1296:
-
Fix Version/s: 0.4.0

> Fix checkstyle issue from Nightly run
> -
>
> Key: HDDS-1296
> URL: https://issues.apache.org/jira/browse/HDDS-1296
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.4.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> [https://ci.anzix.net/job/ozone-nightly/32/checkstyle/moduleName.588460772/fileName.-1184872187/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1296) Fix checkstyle issue from Nightly run

2019-03-18 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar resolved HDDS-1296.
--
Resolution: Fixed

[~xyao] thanks for taking care of this.

> Fix checkstyle issue from Nightly run
> -
>
> Key: HDDS-1296
> URL: https://issues.apache.org/jira/browse/HDDS-1296
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.4.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> [https://ci.anzix.net/job/ozone-nightly/32/checkstyle/moduleName.588460772/fileName.-1184872187/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1263) SCM CLI does not list container with id 1

2019-03-18 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795162#comment-16795162
 ] 

Bharat Viswanadham commented on HDDS-1263:
--

[~nandakumar131] thanks for the review.

I have opened the Jira HDDS-1302 to handle your comments.

 

> SCM CLI does not list container with id 1
> -
>
> Key: HDDS-1263
> URL: https://issues.apache.org/jira/browse/HDDS-1263
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0, 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Steps to reproduce
>  # Create two containers 
> {code:java}
> ozone scmcli create
> ozone scmcli create{code}
>  # Try to list containers
> {code:java}
> hadoop@7a73695402ae:~$ ozone scmcli list --start=0
>  Container ID should be a positive long. 0
> hadoop@7a73695402ae:~$ ozone scmcli list --start=1 
> { 
> "state" : "OPEN",
> "replicationFactor" : "ONE",
> "replicationType" : "STAND_ALONE",
> "usedBytes" : 0,
> "numberOfKeys" : 0,
> "lastUsed" : 274660388,
> "stateEnterTime" : 274646481,
> "owner" : "OZONE",
> "containerID" : 2,
> "deleteTransactionId" : 0,
> "sequenceId" : 0,
> "open" : true 
> }{code}
> There is no way to list the container with containerID 1.
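
The output quoted above is consistent with the start ID being validated as 
positive and then treated as exclusive; a minimal sketch of that kind of 
filter (assumed for illustration, not the actual ScmContainerManager code):
{code:java}
import java.util.List;
import java.util.stream.Collectors;

// Sketch only: if startId must be > 0 and the filter is exclusive,
// container 1 can never appear in the results.
public class ListContainersSketch {

  static List<Long> list(List<Long> ids, long startId, int count) {
    if (startId <= 0) {
      throw new IllegalArgumentException(
          "Container ID should be a positive long. " + startId);
    }
    return ids.stream()
        .filter(id -> id > startId) // exclusive: id 1 is unreachable
        .limit(count)
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    System.out.println(list(List.of(1L, 2L), 1L, 10)); // prints [2]
  }
}
{code}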



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1302) Fix SCM CLI does not list container with id 1

2019-03-18 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-1302:


Assignee: Vivek Ratnavel Subramanian  (was: Bharat Viswanadham)

> Fix SCM CLI does not list container with id 1
> -
>
> Key: HDDS-1302
> URL: https://issues.apache.org/jira/browse/HDDS-1302
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> In HDDS-1263, the actual logic of listContainers in ScmContainerManager.java 
> was changed so that containers can be listed starting with containerID 1. But 
> with this change, it now contradicts the javadoc.
> From [~nandakumar131]'s comments:
> https://issues.apache.org/jira/browse/HDDS-1263?focusedCommentId=16794865&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16794865
>  
> I agree this will be the way to fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1296) Fix checkstyle issue from Nightly run

2019-03-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795151#comment-16795151
 ] 

Hudson commented on HDDS-1296:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16233 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16233/])
HDDS-1296. Fix checkstyle issue from Nightly run. Contributed by Xiaoyu 
(7813154+ajayydv: rev 66a104bc57242b2817e8c1f67cb6776e70cf5c47)
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyLocationInfo.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ScmClient.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java


> Fix checkstyle issue from Nightly run
> -
>
> Key: HDDS-1296
> URL: https://issues.apache.org/jira/browse/HDDS-1296
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.4.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> [https://ci.anzix.net/job/ozone-nightly/32/checkstyle/moduleName.588460772/fileName.-1184872187/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1302) Fix SCM CLI does not list container with id 1

2019-03-18 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1302:
-
Target Version/s: 0.4.0
Priority: Minor  (was: Major)

> Fix SCM CLI does not list container with id 1
> -
>
> Key: HDDS-1302
> URL: https://issues.apache.org/jira/browse/HDDS-1302
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>
> In HDDS-1263, the actual logic of listContainers in ScmContainerManager.java 
> was changed so that containers can be listed starting with containerID 1. But 
> with this change, it now contradicts the javadoc.
> From [~nandakumar131]'s comments:
> https://issues.apache.org/jira/browse/HDDS-1263?focusedCommentId=16794865&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16794865
>  
> I agree this will be the way to fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1302) Fix SCM CLI does not list container with id 1

2019-03-18 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1302:


 Summary: Fix SCM CLI does not list container with id 1
 Key: HDDS-1302
 URL: https://issues.apache.org/jira/browse/HDDS-1302
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


In HDDS-1263, the actual logic of listContainers in ScmContainerManager.java 
was changed so that containers can be listed starting with containerID 1. But 
with this change, it now contradicts the javadoc.

From [~nandakumar131]'s comments:

https://issues.apache.org/jira/browse/HDDS-1263?focusedCommentId=16794865&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16794865

 

I agree this will be the way to fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13972) RBF: Support for Delegation Token (WebHDFS)

2019-03-18 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795182#comment-16795182
 ] 

Íñigo Goiri commented on HDFS-13972:


Thanks [~surendrasingh] for bringing this up.
I think we should do it here; this is the basis of DT in WebHDFS.

> RBF: Support for Delegation Token (WebHDFS)
> ---
>
> Key: HDFS-13972
> URL: https://issues.apache.org/jira/browse/HDFS-13972
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13972-HDFS-13891.001.patch, 
> HDFS-13972-HDFS-13891.002.patch, HDFS-13972-HDFS-13891.003.patch, 
> HDFS-13972-HDFS-13891.004.patch, HDFS-13972-HDFS-13891.005.patch, 
> HDFS-13972-HDFS-13891.006.patch, HDFS-13972-HDFS-13891.007.patch, 
> HDFS-13972-HDFS-13891.008.patch, HDFS-13972-HDFS-13891.009.patch, 
> TestRouterWebHDFSContractTokens.java
>
>
> HDFS Router should support issuing HDFS delegation tokens through WebHDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14327) Support security for DNS resolving

2019-03-18 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795196#comment-16795196
 ] 

Íñigo Goiri commented on HDFS-14327:


Thanks [~fengnanli] for [^HDFS-14327.001.patch].
We may want to rename the JIRA to something about using FQDNs instead of IPs.
Right now there is not much about security itself.
If we want to go there, we would probably need a full test which leverages 
MiniKDC or so.

Regarding the patch itself, I would like to keep the old tests with addresses 
and add the FQDN ones separately.
Can we also fix the checkstyle issues and complete the javadocs?

> Support security for DNS resolving
> --
>
> Key: HDFS-14327
> URL: https://issues.apache.org/jira/browse/HDFS-14327
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-14327.001.patch
>
>
> With DNS resolving, clients will get the IP of the servers (NN/Routers) and 
> use the IP addresses to access the machines. This will fail in a secure 
> environment, as Kerberos uses the domain name in the principal and so won't 
> recognize the IP addresses.
> This task mainly adds a reverse lookup on top of the current behavior to get 
> the domain name after the IP is fetched. After that, clients will still use 
> the domain name to access the servers.
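
The reverse lookup itself is available in the standard Java API; a minimal 
sketch of the idea (the surrounding resolver wiring is assumed):
{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch only: resolve a host to an IP, then reverse-resolve the IP back
// to an FQDN so Kerberos principal matching keeps working.
public class ReverseLookupSketch {

  static String toFqdn(String resolvedIp) throws UnknownHostException {
    InetAddress addr = InetAddress.getByName(resolvedIp);
    // getCanonicalHostName() performs the reverse (PTR) lookup and falls
    // back to the literal IP if the lookup fails.
    return addr.getCanonicalHostName();
  }

  public static void main(String[] args) throws UnknownHostException {
    System.out.println(toFqdn("127.0.0.1")); // e.g. "localhost"
  }
}
{code}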



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1303) add native acl support for OM operations

2019-03-18 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-1303:


 Summary: add native acl support for OM operations
 Key: HDDS-1303
 URL: https://issues.apache.org/jira/browse/HDDS-1303
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Ajay Kumar






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1303) add native acl support for OM operations

2019-03-18 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1303:
-
Description: add native acl support for OM operations

> add native acl support for OM operations
> 
>
> Key: HDDS-1303
> URL: https://issues.apache.org/jira/browse/HDDS-1303
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Priority: Major
>
> add native acl support for OM operations



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1303) add native acl support for OM operations

2019-03-18 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HDDS-1303:


Assignee: Ajay Kumar

> add native acl support for OM operations
> 
>
> Key: HDDS-1303
> URL: https://issues.apache.org/jira/browse/HDDS-1303
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> add native acl support for OM operations



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1302) Fix SCM CLI does not list container with id 1

2019-03-18 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1302:
---
Target Version/s: 0.5.0  (was: 0.4.0)

> Fix SCM CLI does not list container with id 1
> -
>
> Key: HDDS-1302
> URL: https://issues.apache.org/jira/browse/HDDS-1302
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>
> In HDDS-1263, the actual logic of listContainers in ScmContainerManager.java 
> was changed so that containers can be listed starting with containerID 1. But 
> with this change, it now contradicts the javadoc.
> From [~nandakumar131]'s comments:
> https://issues.apache.org/jira/browse/HDDS-1263?focusedCommentId=16794865&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16794865
>  
> I agree this will be the way to fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1246) Add ozone delegation token utility subcmd for Ozone CLI

2019-03-18 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1246:
-
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

> Add ozone delegation token utility subcmd for Ozone CLI
> ---
>
> Key: HDDS-1246
> URL: https://issues.apache.org/jira/browse/HDDS-1246
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> This allows running dtutil in integration tests and dev tests to demo 
> Ozone security.
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1250) In OM HA AllocateBlock call where connecting to SCM from OM should not happen on Ratis

2019-03-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1250?focusedWorklogId=214890&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214890
 ]

ASF GitHub Bot logged work on HDDS-1250:


Author: ASF GitHub Bot
Created on: 18/Mar/19 17:12
Start Date: 18/Mar/19 17:12
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #591: HDDS-1250: In 
OM HA AllocateBlock call where connecting to SCM from OM should not happen on 
Ratis.
URL: https://github.com/apache/hadoop/pull/591#issuecomment-474013221
 
 
   Thank You @hanishakoneru  for the review.
   I have addressed the review comments.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214890)
Time Spent: 3h 20m  (was: 3h 10m)

> In OM HA AllocateBlock call where connecting to SCM from OM should not happen 
> on Ratis
> --
>
> Key: HDDS-1250
> URL: https://issues.apache.org/jira/browse/HDDS-1250
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> In OM HA, currently when allocateBlock is called, in applyTransaction() on 
> all OM nodes we make a call to SCM and write the allocated block information 
> into the OM DB. The problem with this is that every OM calls allocateBlock 
> and appends a new BlockInfo into OmKeyInfo, and this is also a correctness 
> issue. (All OMs should have the same block information for a key, even 
> though this might eventually be changed during key commit.)
>  
> The proposed approach is:
> 1. The call to SCM for block allocation will happen outside of Ratis; the 
> block information is then passed through Ratis, and the write to the DB 
> happens via Ratis.
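
A sketch of the proposed ordering (interfaces and names are assumptions; only 
the split of work before and inside Ratis is the point):
{code:java}
// Sketch only: call SCM once, outside the Ratis state machine, then
// replicate just the DB write so every OM stores the same block.
public class AllocateBlockSketch {

  interface ScmSketch {
    String allocateBlock(long size); // external call to SCM
  }

  interface RatisSketch {
    void replicate(Runnable dbWrite); // consensus path
  }

  static void allocateBlock(ScmSketch scm, RatisSketch ratis, long size) {
    // 1. Outside Ratis: a single SCM call, so each OM does not allocate
    //    its own block in applyTransaction().
    String blockInfo = scm.allocateBlock(size);

    // 2. Via Ratis: the block info rides in the replicated transaction,
    //    and applying it only writes to the OM DB.
    ratis.replicate(() -> writeToOmDb(blockInfo));
  }

  static void writeToOmDb(String blockInfo) {
    System.out.println("persisting " + blockInfo);
  }
}
{code}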



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1303) add native acl support for OM operations

2019-03-18 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1303:
-
Target Version/s: 0.4.0

> add native acl support for OM operations
> 
>
> Key: HDDS-1303
> URL: https://issues.apache.org/jira/browse/HDDS-1303
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> add native acl support for OM operations



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14211) [Consistent Observer Reads] Allow for configurable "always msync" mode

2019-03-18 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795221#comment-16795221
 ] 

Erik Krogen commented on HDFS-14211:


Hey [~ayushtkn], thanks for your concerns, they are very valid. Regarding 
documentation, you bring up a great point. I am uploading a new v002 patch now 
which adds documentation of this feature to {{ObserverNameNode.md}}. Regarding 
explicit vs. implicit calls, any call to the Active will update 
{{lastMsyncTimeMs}}. See {{ObserverReadProxyProvider L415}}. This should cover 
the scenario you have described; please let me know if you see further issues.

> [Consistent Observer Reads] Allow for configurable "always msync" mode
> --
>
> Key: HDFS-14211
> URL: https://issues.apache.org/jira/browse/HDFS-14211
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-14211.000.patch, HDFS-14211.001.patch
>
>
> To allow for reads to be serviced from an ObserverNode (see HDFS-12943) in a 
> consistent way, an {{msync}} API was introduced (HDFS-13688) to allow for a 
> client to fetch the latest transaction ID from the Active NN, thereby 
> ensuring that subsequent reads from the ObserverNode will be up-to-date with 
> the current state of the Active.
> Using this properly, however, requires application-side changes: for 
> example, a NodeManager should call {{msync}} before localizing the resources 
> for a client, since it received notification of the existence of those 
> resources via communication which is out-of-band to HDFS and thus could 
> potentially attempt to localize them prior to the availability of those 
> resources on the ObserverNode.
> Until such application-side changes can be made, which will be a longer-term 
> effort, we need to provide a mechanism for unchanged clients to utilize the 
> ObserverNode without exposing such a client to inconsistencies. This is 
> essentially phase 3 of the roadmap outlined in the [design 
> document|https://issues.apache.org/jira/secure/attachment/12915990/ConsistentReadsFromStandbyNode.pdf]
>  for HDFS-12943.
> The design document proposes some heuristics based on understanding of how 
> common applications (e.g. MR) use HDFS for resources. As an initial pass, we 
> can simply have a flag which tells a client to call {{msync}} before _every 
> single_ read operation. This may seem counterintuitive, as it turns every 
> read operation into two RPCs: {{msync}} to the Active followed by an actual 
> read operation to the Observer. However, the {{msync}} operation is extremely 
> lightweight, as it does not acquire the {{FSNamesystemLock}}, and in 
> experiments we have found that this approach can easily scale to well over 
> 100,000 {{msync}} operations per second on the Active (while still servicing 
> approx. 10,000 write op/s). Combined with the fast-path edit log tailing for 
> standby/observer nodes (HDFS-13150), this "always msync" approach should 
> introduce only a few ms of extra latency to each read call.
> Below are some experimental results collected from experiments which convert 
> a normal RPC workload into one in which all read operations are turned into 
> an {{msync}}. The baseline is a workload of 1.5k write op/s and 25k read op/s.
> ||Rate Multiplier|2|4|6|8||
> ||RPC Queue Avg Time (ms)|14|53|110|125||
> ||RPC Queue NumOps Avg (k)|51|102|147|177||
> ||RPC Queue NumOps Max (k)|148|269|306|312||
> _(numbers are approximate and should be viewed primarily for their trends)_
> Results are promising up to between 4x and 6x of the baseline workload, which 
> is approx. 100-150k read op/s.
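
A minimal sketch of the "always msync" client behavior (the flag and wrapper 
below are assumptions; the real logic lives in {{ObserverReadProxyProvider}}):
{code:java}
// Sketch only: before every read served by the Observer, first call
// msync() against the Active so the read reflects the latest state.
public class AlwaysMsyncSketch {

  interface NameNodeSketch {
    void msync();             // lightweight call to the Active
    String read(String path); // read op served by the Observer
  }

  private final NameNodeSketch active;
  private final NameNodeSketch observer;
  private final boolean alwaysMsync; // the proposed configurable flag

  AlwaysMsyncSketch(NameNodeSketch active, NameNodeSketch observer,
      boolean alwaysMsync) {
    this.active = active;
    this.observer = observer;
    this.alwaysMsync = alwaysMsync;
  }

  String read(String path) {
    if (alwaysMsync) {
      active.msync(); // cheap: does not take the FSNamesystemLock
    }
    return observer.read(path); // now up to date with the Active
  }
}
{code}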



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14211) [Consistent Observer Reads] Allow for configurable "always msync" mode

2019-03-18 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-14211:
---
Attachment: HDFS-14211.002.patch

> [Consistent Observer Reads] Allow for configurable "always msync" mode
> --
>
> Key: HDFS-14211
> URL: https://issues.apache.org/jira/browse/HDFS-14211
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-14211.000.patch, HDFS-14211.001.patch, 
> HDFS-14211.002.patch
>
>
> To allow for reads to be serviced from an ObserverNode (see HDFS-12943) in a 
> consistent way, an {{msync}} API was introduced (HDFS-13688) to allow for a 
> client to fetch the latest transaction ID from the Active NN, thereby 
> ensuring that subsequent reads from the ObserverNode will be up-to-date with 
> the current state of the Active.
> Using this properly, however, requires application-side changes: for 
> example, a NodeManager should call {{msync}} before localizing the resources 
> for a client, since it received notification of the existence of those 
> resources via communication which is out-of-band to HDFS and thus could 
> potentially attempt to localize them prior to the availability of those 
> resources on the ObserverNode.
> Until such application-side changes can be made, which will be a longer-term 
> effort, we need to provide a mechanism for unchanged clients to utilize the 
> ObserverNode without exposing such a client to inconsistencies. This is 
> essentially phase 3 of the roadmap outlined in the [design 
> document|https://issues.apache.org/jira/secure/attachment/12915990/ConsistentReadsFromStandbyNode.pdf]
>  for HDFS-12943.
> The design document proposes some heuristics based on understanding of how 
> common applications (e.g. MR) use HDFS for resources. As an initial pass, we 
> can simply have a flag which tells a client to call {{msync}} before _every 
> single_ read operation. This may seem counterintuitive, as it turns every 
> read operation into two RPCs: {{msync}} to the Active followed by an actual 
> read operation to the Observer. However, the {{msync}} operation is extremely 
> lightweight, as it does not acquire the {{FSNamesystemLock}}, and in 
> experiments we have found that this approach can easily scale to well over 
> 100,000 {{msync}} operations per second on the Active (while still servicing 
> approx. 10,000 write op/s). Combined with the fast-path edit log tailing for 
> standby/observer nodes (HDFS-13150), this "always msync" approach should 
> introduce only a few ms of extra latency to each read call.
> Below are some experimental results collected from experiments which convert 
> a normal RPC workload into one in which all read operations are turned into 
> an {{msync}}. The baseline is a workload of 1.5k write op/s and 25k read op/s.
> ||Rate Multiplier|2|4|6|8||
> ||RPC Queue Avg Time (ms)|14|53|110|125||
> ||RPC Queue NumOps Avg (k)|51|102|147|177||
> ||RPC Queue NumOps Max (k)|148|269|306|312||
> _(numbers are approximate and should be viewed primarily for their trends)_
> Results are promising up to between 4x and 6x of the baseline workload, which 
> is approx. 100-150k read op/s.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1185) Optimize GetFileStatus in OzoneFileSystem by reducing the number of rpc call to OM.

2019-03-18 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1185:

Attachment: HDDS-1185.004.patch

> Optimize GetFileStatus in OzoneFileSystem by reducing the number of rpc call 
> to OM.
> ---
>
> Key: HDDS-1185
> URL: https://issues.apache.org/jira/browse/HDDS-1185
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Filesystem
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Critical
> Fix For: 0.5.0
>
> Attachments: HDDS-1185.001.patch, HDDS-1185.002.patch, 
> HDDS-1185.003.patch, HDDS-1185.004.patch
>
>
> GetFileStatus sends multiple RPC calls to Ozone Manager to fetch the file 
> status for a given file. This can be optimized by performing all the 
> processing on the OzoneManager side.
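
Schematically, the optimization being described (method names are assumed 
for illustration):
{code:java}
// Sketch only: answer getFileStatus with one OM round trip instead of
// probing for the key and the directory marker separately.
public class GetFileStatusSketch {

  interface OmSketch {
    boolean keyExists(String path);              // RPC
    boolean dirMarkerExists(String path);        // RPC
    String getFileStatusServerSide(String path); // single RPC
  }

  // Before: up to several RPCs from the client for one status call.
  static String statusClientSide(OmSketch om, String path) {
    if (om.keyExists(path)) {
      return "FILE";      // RPC 1
    }
    if (om.dirMarkerExists(path)) {
      return "DIRECTORY"; // RPC 2
    }
    return "NOT_FOUND";
  }

  // After: the OM performs all checks locally and answers once.
  static String statusServerSide(OmSketch om, String path) {
    return om.getFileStatusServerSide(path); // single RPC
  }
}
{code}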



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1185) Optimize GetFileStatus in OzoneFileSystem by reducing the number of rpc call to OM.

2019-03-18 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795226#comment-16795226
 ] 

Mukul Kumar Singh commented on HDDS-1185:
-

Thanks for the review, [~jnp] and [~ljain]. I have uploaded a v4 patch.
This patch fixes the review comments and also adds a new unit test.

> Optimize GetFileStatus in OzoneFileSystem by reducing the number of rpc call 
> to OM.
> ---
>
> Key: HDDS-1185
> URL: https://issues.apache.org/jira/browse/HDDS-1185
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Filesystem
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Critical
> Fix For: 0.5.0
>
> Attachments: HDDS-1185.001.patch, HDDS-1185.002.patch, 
> HDDS-1185.003.patch, HDDS-1185.004.patch
>
>
> GetFileStatus sends multiple RPC calls to Ozone Manager to fetch the file 
> status for a given file. This can be optimized by performing all the 
> processing on the OzoneManager side.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14327) Support security for DNS resolving

2019-03-18 Thread Fengnan Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795232#comment-16795232
 ] 

Fengnan Li commented on HDFS-14327:
---

[~elgoiri] Thanks for the review!

I am completely with you that this patch doesn't add many security features; 
rather, it is about making the DNS resolving compatible with the common 
Kerberos setup. From that perspective, renaming the Jira makes more sense.

I will create a separate test and fix the checkstyle issues and javadocs.

> Support security for DNS resolving
> --
>
> Key: HDFS-14327
> URL: https://issues.apache.org/jira/browse/HDFS-14327
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-14327.001.patch
>
>
> With DNS resolving, clients will get the IP of the servers (NN/Routers) and 
> use the IP addresses to access the machines. This will fail in a secure 
> environment, as Kerberos uses the domain name in the principal and so won't 
> recognize the IP addresses.
> This task mainly adds a reverse lookup on top of the current behavior to get 
> the domain name after the IP is fetched. After that, clients will still use 
> the domain name to access the servers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13972) RBF: Support for Delegation Token (WebHDFS)

2019-03-18 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795233#comment-16795233
 ] 

CR Hota commented on HDFS-13972:


[~surendrasingh] [~ajisakaa] [~elgoiri] 

Thanks for the review. We should definitely support DT for create and do the 
change here. Will work on the change and submit a patch soon.

> RBF: Support for Delegation Token (WebHDFS)
> ---
>
> Key: HDFS-13972
> URL: https://issues.apache.org/jira/browse/HDFS-13972
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13972-HDFS-13891.001.patch, 
> HDFS-13972-HDFS-13891.002.patch, HDFS-13972-HDFS-13891.003.patch, 
> HDFS-13972-HDFS-13891.004.patch, HDFS-13972-HDFS-13891.005.patch, 
> HDFS-13972-HDFS-13891.006.patch, HDFS-13972-HDFS-13891.007.patch, 
> HDFS-13972-HDFS-13891.008.patch, HDFS-13972-HDFS-13891.009.patch, 
> TestRouterWebHDFSContractTokens.java
>
>
> HDFS Router should support issuing HDFS delegation tokens through WebHDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14327) Using FQDN instead of IP for DNS resolving

2019-03-18 Thread Fengnan Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengnan Li updated HDFS-14327:
--
Summary: Using FQDN instead of IP for DNS resolving  (was: Support security 
for DNS resolving)

> Using FQDN instead of IP for DNS resolving
> --
>
> Key: HDFS-14327
> URL: https://issues.apache.org/jira/browse/HDFS-14327
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-14327.001.patch
>
>
> The DNS resolving, clients will get the IP of the servers (NN/Routers) and 
> use the IP addresses to access the machine. This will fail in secure 
> environment as Kerberos is using the domain name in the principal so it won't 
> recognize the IP addresses.
> This task is mainly adding a reverse look up on the current basis and get the 
> domain name after the IP is fetched. After that clients will still use the 
> domain name to access the servers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14327) Using FQDN instead of IP for DNS resolving

2019-03-18 Thread Fengnan Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengnan Li updated HDFS-14327:
--
Description: 
With [HDFS-14118|https://issues.apache.org/jira/browse/HDFS-14118], clients can 
get the IP of the servers (NN/Routers) and use the IP addresses to access the 
machines. This will fail in a secure environment, as Kerberos uses the domain 
name (FQDN) in the principal and so won't recognize the IP addresses.

This task mainly adds a reverse lookup on top of the current behavior to get 
the domain name after the IP is fetched. After that, clients will still use 
the domain name to access the servers.

  was:
With DNS resolving, clients will get the IP of the servers (NN/Routers) and 
use the IP addresses to access the machines. This will fail in a secure 
environment, as Kerberos uses the domain name in the principal and so won't 
recognize the IP addresses.

This task mainly adds a reverse lookup on top of the current behavior to get 
the domain name after the IP is fetched. After that, clients will still use 
the domain name to access the servers.


> Using FQDN instead of IP for DNS resolving
> --
>
> Key: HDFS-14327
> URL: https://issues.apache.org/jira/browse/HDFS-14327
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-14327.001.patch
>
>
> With [HDFS-14118|https://issues.apache.org/jira/browse/HDFS-14118], clients 
> can get the IP of the servers (NN/Routers) and use the IP addresses to access 
> the machines. This will fail in a secure environment, as Kerberos uses the 
> domain name (FQDN) in the principal and so won't recognize the IP addresses.
> This task mainly adds a reverse lookup on top of the current behavior to get 
> the domain name after the IP is fetched. After that, clients will still use 
> the domain name to access the servers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14327) Using FQDN instead of IP to access servers with DNS resolving

2019-03-18 Thread Fengnan Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengnan Li updated HDFS-14327:
--
Summary: Using FQDN instead of IP to access servers with DNS resolving  
(was: Using FQDN instead of IP for DNS resolving)

> Using FQDN instead of IP to access servers with DNS resolving
> -
>
> Key: HDFS-14327
> URL: https://issues.apache.org/jira/browse/HDFS-14327
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-14327.001.patch
>
>
> With [HDFS-14118|https://issues.apache.org/jira/browse/HDFS-14118], clients 
> can get the IP of the servers (NN/Routers) and use the IP addresses to access 
> the machines. This will fail in a secure environment, as Kerberos uses the 
> domain name (FQDN) in the principal and so won't recognize the IP addresses.
> This task mainly adds a reverse lookup on top of the current behavior to get 
> the domain name after the IP is fetched. After that, clients will still use 
> the domain name to access the servers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1304) Ozone ha breaks service discovery

2019-03-18 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1304:
-
Affects Version/s: 0.4.0

> Ozone ha breaks service discovery
> -
>
> Key: HDDS-1304
> URL: https://issues.apache.org/jira/browse/HDDS-1304
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.4.0
>Reporter: Ajay Kumar
>Assignee: Nanda kumar
>Priority: Major
>
> Ozone ha breaks service discovery



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1304) Ozone ha breaks service discovery

2019-03-18 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-1304:


 Summary: Ozone ha breaks service discovery
 Key: HDDS-1304
 URL: https://issues.apache.org/jira/browse/HDDS-1304
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Ajay Kumar
Assignee: Nanda kumar


Ozone ha breaks service discovery



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1304) Ozone ha breaks service discovery

2019-03-18 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1304:
-
Priority: Blocker  (was: Major)

> Ozone ha breaks service discovery
> -
>
> Key: HDDS-1304
> URL: https://issues.apache.org/jira/browse/HDDS-1304
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.4.0
>Reporter: Ajay Kumar
>Assignee: Nanda kumar
>Priority: Blocker
>
> Ozone ha breaks service discovery



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1302) Fix SCM CLI does not list container with id 1

2019-03-18 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1302:
--
Target Version/s: 0.4.0  (was: 0.5.0)

> Fix SCM CLI does not list container with id 1
> -
>
> Key: HDDS-1302
> URL: https://issues.apache.org/jira/browse/HDDS-1302
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>
> In HDDS-1263, the actual logic of listContainers in ScmContainerManager.java 
> was changed so that containers can be listed starting with containerID 1. But 
> with this change, it now contradicts the javadoc.
> From [~nandakumar131]'s comments:
> https://issues.apache.org/jira/browse/HDDS-1263?focusedCommentId=16794865&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16794865
>  
> I agree this will be the way to fix it.
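
As a rough illustration of the inclusive-vs-exclusive start-ID ambiguity (a 
standalone sketch; the sorted-map layout is assumed and is not 
ScmContainerManager's actual code):
{code:java}
import java.util.NavigableMap;
import java.util.TreeMap;

public final class ListContainersSketch {
  public static void main(String[] args) {
    NavigableMap<Long, String> containers = new TreeMap<>();
    containers.put(1L, "container-1");
    containers.put(2L, "container-2");
    containers.put(3L, "container-3");

    // Exclusive start ID: listing from ID 1 silently skips container 1.
    System.out.println(containers.tailMap(1L, false).keySet()); // [2, 3]

    // Inclusive start ID: container 1 is returned, as callers expect.
    System.out.println(containers.tailMap(1L, true).keySet());  // [1, 2, 3]
  }
}
{code}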



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1302) Fix SCM CLI does not list container with id 1

2019-03-18 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795256#comment-16795256
 ] 

Nanda kumar commented on HDDS-1302:
---

Since HDDS-1263 is also merged to 0.4.0, this should also be targeted for 0.4.0, 
or we have to revert HDDS-1263 from branch 0.4.0.

Since HDDS-1263 changes the behavior of the list call, the behavior/output of 
ozone shell and OzoneFS will be inconsistent.

> Fix SCM CLI does not list container with id 1
> -
>
> Key: HDDS-1302
> URL: https://issues.apache.org/jira/browse/HDDS-1302
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>
> HDDS-1263 changed listContainers in ScmContainerManager.java to handle 
> listing containers starting from containerID 1 by changing its actual logic. 
> But this change now contradicts the javadoc.
> From [~nandakumar131]'s comments:
> https://issues.apache.org/jira/browse/HDDS-1263?focusedCommentId=16794865&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16794865
>  
> I agree this will be the way to fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1250) In OM HA AllocateBlock call where connecting to SCM from OM should not happen on Ratis

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795269#comment-16795269
 ] 

Hadoop QA commented on HDDS-1250:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 58s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.TestOMDbCheckpointServlet |
|   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
|   | hadoop.ozone.web.TestOzoneV

[jira] [Work logged] (HDDS-1250) In OM HA AllocateBlock call where connecting to SCM from OM should not happen on Ratis

2019-03-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1250?focusedWorklogId=214933&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214933
 ]

ASF GitHub Bot logged work on HDDS-1250:


Author: ASF GitHub Bot
Created on: 18/Mar/19 18:21
Start Date: 18/Mar/19 18:21
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #591: HDDS-1250: IIn OM 
HA AllocateBlock call where connecting to SCM from OM should not happen on 
Ratis.
URL: https://github.com/apache/hadoop/pull/591#issuecomment-474042051
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 27 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1002 | trunk passed |
   | +1 | compile | 90 | trunk passed |
   | +1 | checkstyle | 24 | trunk passed |
   | +1 | mvnsite | 90 | trunk passed |
   | +1 | shadedclient | 748 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 91 | trunk passed |
   | +1 | javadoc | 69 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for patch |
   | +1 | mvninstall | 91 | the patch passed |
   | +1 | compile | 88 | the patch passed |
   | +1 | cc | 88 | the patch passed |
   | +1 | javac | 88 | the patch passed |
   | -0 | checkstyle | 20 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 78 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 683 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 106 | the patch passed |
   | +1 | javadoc | 63 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 36 | common in the patch passed. |
   | +1 | unit | 34 | ozone-manager in the patch passed. |
   | -1 | unit | 1135 | integration-test in the patch failed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 4571 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.scm.TestXceiverClientManager |
   |   | hadoop.ozone.om.TestOmBlockVersioning |
   |   | hadoop.ozone.om.TestContainerReportWithKeys |
   |   | hadoop.ozone.TestContainerOperations |
   |   | hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineUtils |
   |   | hadoop.ozone.scm.TestGetCommittedBlockLengthAndPutKey |
   |   | hadoop.ozone.scm.pipeline.TestSCMPipelineMetrics |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestDeleteContainerHandler
 |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.web.client.TestVolume |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.ozShell.TestS3Shell |
   |   | hadoop.ozone.web.client.TestBuckets |
   |   | hadoop.ozone.ozShell.TestOzoneDatanodeShell |
   |   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.scm.TestXceiverClientMetrics |
   |   | hadoop.ozone.web.client.TestOzoneClient |
   |   | hadoop.hdds.scm.pipeline.TestSCMRestart |
   |   | hadoop.ozone.om.TestMultipleContainerReadWrite |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerHandler
 |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.om.TestOmMetrics |
   |   | hadoop.ozone.scm.TestContainerSmallFile |
   |   | hadoop.ozone.om.TestOzoneManagerConfiguration |
   |   | hadoop.ozone.scm.TestSCMMXBean |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestOmAcls |
   |   | hadoop.ozone.scm.pipeline.TestPipelineManagerMXBean |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.om.TestOMDbCheckpointServlet |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | 

[jira] [Commented] (HDDS-1250) In OM HA AllocateBlock call where connecting to SCM from OM should not happen on Ratis

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795274#comment-16795274
 ] 

Hadoop QA commented on HDDS-1250:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 55s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestStorageContainerManager |
|   | hadoop.ozone.scm.TestXceiverClientManager |
|   | hadoop.ozone.om.TestOmBlockVersi

[jira] [Work logged] (HDDS-1250) In OM HA AllocateBlock call where connecting to SCM from OM should not happen on Ratis

2019-03-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1250?focusedWorklogId=214931&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214931
 ]

ASF GitHub Bot logged work on HDDS-1250:


Author: ASF GitHub Bot
Created on: 18/Mar/19 18:21
Start Date: 18/Mar/19 18:21
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #591: HDDS-1250: IIn 
OM HA AllocateBlock call where connecting to SCM from OM should not happen on 
Ratis.
URL: https://github.com/apache/hadoop/pull/591#issuecomment-474041921
 
 
   The test failures are due to:
   30 is not within min = 500 or max = 10
   
   This will be taken care of in the below jira:
   https://issues.apache.org/jira/browse/HDDS-1297
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214931)
Time Spent: 3h 40m  (was: 3.5h)

> In OM HA AllocateBlock call where connecting to SCM from OM should not happen 
> on Ratis
> --
>
> Key: HDDS-1250
> URL: https://issues.apache.org/jira/browse/HDDS-1250
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> In OM HA, currently when allocateBlock is called, in applyTransaction() on 
> all OM nodes, we make a call to SCM and write the allocated block information 
> into the OM DB. The problem with this is that every OM calls allocateBlock 
> and appends new BlockInfo into OmKeyInfo, and it is also a correctness issue. 
> (All OMs should have the same block information for a key, even though this 
> might eventually be changed during key commit.)
>  
> The proposed approach is:
> 1. Calling SCM for block allocation will happen outside of Ratis; the 
> resulting block information is passed along, and writing it to the DB will 
> happen via Ratis.
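
A minimal sketch of the proposed flow (all type and method names below are 
hypothetical stand-ins, not the actual OM/SCM classes):
{code:java}
// Hypothetical sketch: the block is allocated once, outside Ratis, and the
// concrete block info is shipped through Ratis so that applyTransaction()
// on every OM persists identical data.
final class AllocateBlockFlowSketch {

  static final class BlockInfo {
    final long containerId;
    final long localId;
    BlockInfo(long containerId, long localId) {
      this.containerId = containerId;
      this.localId = localId;
    }
  }

  interface ScmBlockClient {
    BlockInfo allocateBlock(long size);       // RPC to SCM
  }

  interface RatisServer {
    void submit(BlockInfo preAllocatedBlock); // replicated to all OMs
  }

  static void handleAllocateBlock(ScmBlockClient scm, RatisServer ratis,
      long size) {
    BlockInfo block = scm.allocateBlock(size); // leader only, before Ratis
    ratis.submit(block);                       // followers just persist it
  }
}
{code}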



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1250) In OM HA AllocateBlock call where connecting to SCM from OM should not happen on Ratis

2019-03-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1250?focusedWorklogId=214921&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214921
 ]

ASF GitHub Bot logged work on HDDS-1250:


Author: ASF GitHub Bot
Created on: 18/Mar/19 18:16
Start Date: 18/Mar/19 18:16
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #591: HDDS-1250: IIn OM 
HA AllocateBlock call where connecting to SCM from OM should not happen on 
Ratis.
URL: https://github.com/apache/hadoop/pull/591#issuecomment-474040232
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 62 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1129 | trunk passed |
   | +1 | compile | 105 | trunk passed |
   | +1 | checkstyle | 27 | trunk passed |
   | +1 | mvnsite | 110 | trunk passed |
   | +1 | shadedclient | 858 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 100 | trunk passed |
   | +1 | javadoc | 72 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for patch |
   | +1 | mvninstall | 104 | the patch passed |
   | +1 | compile | 100 | the patch passed |
   | +1 | cc | 100 | the patch passed |
   | +1 | javac | 100 | the patch passed |
   | -0 | checkstyle | 24 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 87 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 795 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 113 | the patch passed |
   | +1 | javadoc | 69 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 38 | common in the patch passed. |
   | +1 | unit | 44 | ozone-manager in the patch passed. |
   | -1 | unit | 418 | integration-test in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 4367 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOMDbCheckpointServlet |
   |   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
   |   | hadoop.ozone.web.TestOzoneVolumes |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.scm.TestSCMMXBean |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.scm.TestXceiverClientManager |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.scm.pipeline.TestPipelineManagerMXBean |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.ozShell.TestOzoneDatanodeShell |
   |   | hadoop.ozone.om.TestMultipleContainerReadWrite |
   |   | hadoop.ozone.om.TestOmMetrics |
   |   | hadoop.ozone.web.client.TestOzoneClient |
   |   | hadoop.ozone.scm.TestAllocateContainer |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.om.TestScmChillMode |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.TestContainerOperations |
   |   | hadoop.ozone.om.TestOzoneManagerConfiguration |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | hadoop.ozone.scm.TestGetCommittedBlockLengthAndPutKey |
   |   | hadoop.ozone.scm.TestXceiverClientMetrics |
   |   | hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.scm.node.TestSCMNodeMetrics |
   |   | hadoop.ozone.om.TestOmBlockVersioning |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.ozShell.TestS3Shell |
   |   | hadoop.ozone.web.client.TestVolume |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.scm.TestContainerSmallFile |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerHandler
 |
   |   | hadoop.ozone.om.TestContainerReportWithK

[jira] [Commented] (HDDS-1185) Optimize GetFileStatus in OzoneFileSystem by reducing the number of rpc call to OM.

2019-03-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795278#comment-16795278
 ] 

Hadoop QA commented on HDDS-1185:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-ozone: The patch generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m  
6s{color} | {color:red} hadoop-ozone generated 3 new + 2 unchanged - 0 fixed = 
5 total (was 2) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 59s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 13s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.node.TestSCMNodeManager |
|   | hadoop.ozone.om.TestOMDbCheckpointServlet |
|   | ha

[jira] [Updated] (HDDS-1304) Ozone ha breaks service discovery

2019-03-18 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1304:
--
Issue Type: Bug  (was: Sub-task)
Parent: (was: HDDS-4)

> Ozone ha breaks service discovery
> -
>
> Key: HDDS-1304
> URL: https://issues.apache.org/jira/browse/HDDS-1304
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Ajay Kumar
>Assignee: Nanda kumar
>Priority: Blocker
>
> Ozone ha breaks service discovery



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1302) Fix SCM CLI does not list container with id 1

2019-03-18 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795293#comment-16795293
 ] 

Bharat Viswanadham commented on HDDS-1302:
--

[~nandakumar131]

Sorry, I have not understood how this change will affect OzoneFS (I might be 
missing something here). Could you provide more info on this?

> Fix SCM CLI does not list container with id 1
> -
>
> Key: HDDS-1302
> URL: https://issues.apache.org/jira/browse/HDDS-1302
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>
> HDDS-1263 changed listContainers in ScmContainerManager.java to handle 
> listing containers starting from containerID 1 by changing its actual logic. 
> But this change now contradicts the javadoc.
> From [~nandakumar131]'s comments:
> https://issues.apache.org/jira/browse/HDDS-1263?focusedCommentId=16794865&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16794865
>  
> I agree this will be the way to fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1217) Refactor ChillMode rules and chillmode manager

2019-03-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1217?focusedWorklogId=214949&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214949
 ]

ASF GitHub Bot logged work on HDDS-1217:


Author: ASF GitHub Bot
Created on: 18/Mar/19 18:56
Start Date: 18/Mar/19 18:56
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #558: 
HDDS-1217. Refactor ChillMode rules and chillmode manager.
URL: https://github.com/apache/hadoop/pull/558#discussion_r266593830
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ChillModeExitRule.java
 ##
 @@ -17,16 +17,87 @@
  */
 package org.apache.hadoop.hdds.scm.chillmode;
 
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+
 /**
- * Interface for defining chill mode exit rules.
+ * Abstract class for ChillModeExitRules. When a new rule is added, the new
+ * rule should extend this abstract class.
+ *
+ * Each rule should:
+ * 1. Add a handler for the event it is looking for during the
+ * initialization of the rule.
+ * 2. Add the rule to the list of rules in SCMChillModeManager.
+ *
  *
  * @param <T>
  */
-public interface ChillModeExitRule<T> {
+public abstract class ChillModeExitRule<T> implements EventHandler<T> {
+
+  private final SCMChillModeManager chillModeManager;
+  private final String ruleName;
+
+  public ChillModeExitRule(SCMChillModeManager chillModeManager,
+      String ruleName) {
+    this.chillModeManager = chillModeManager;
+    this.ruleName = ruleName;
+  }
+
+  /**
+   * Returns the name of this ChillModeExitRule.
+   * @return ruleName
+   */
+  public String getRuleName() {
+    return ruleName;
+  }
+
+
+  /**
+   * Validates this rule. If the rule condition is met, returns true; else
+   * returns false.
+   * @return boolean
+   */
+  public abstract boolean validate();
+
+  /**
+   * Actual processing logic for this rule.
+   * @param report
+   */
+  public abstract void process(T report);
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214949)
Time Spent: 3.5h  (was: 3h 20m)

> Refactor ChillMode rules and chillmode manager
> --
>
> Key: HDDS-1217
> URL: https://issues.apache.org/jira/browse/HDDS-1217
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> # Make ChillModeExitRule an abstract class and move the common logic for all 
> rules into it.
>  # Update tests for chill mode accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1217) Refactor ChillMode rules and chillmode manager

2019-03-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1217?focusedWorklogId=214950&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214950
 ]

ASF GitHub Bot logged work on HDDS-1217:


Author: ASF GitHub Bot
Created on: 18/Mar/19 18:57
Start Date: 18/Mar/19 18:57
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #558: 
HDDS-1217. Refactor ChillMode rules and chillmode manager.
URL: https://github.com/apache/hadoop/pull/558#discussion_r266594270
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ChillModeExitRule.java
 ##
 @@ -17,16 +17,87 @@
  */
 package org.apache.hadoop.hdds.scm.chillmode;
 
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+
 /**
- * Interface for defining chill mode exit rules.
+ * Abstract class for ChillModeExitRules. When a new rule is added, the new
+ * rule should extend this abstract class.
+ *
+ * Each rule should:
+ * 1. Add a handler for the event it is looking for during the
+ * initialization of the rule.
+ * 2. Add the rule to the list of rules in SCMChillModeManager.
+ *
  *
  * @param <T>
  */
-public interface ChillModeExitRule<T> {
+public abstract class ChillModeExitRule<T> implements EventHandler<T> {
+
+  private final SCMChillModeManager chillModeManager;
+  private final String ruleName;
+
+  public ChillModeExitRule(SCMChillModeManager chillModeManager,
+      String ruleName) {
+    this.chillModeManager = chillModeManager;
+    this.ruleName = ruleName;
+  }
+
+  /**
+   * Returns the name of this ChillModeExitRule.
+   * @return ruleName
+   */
+  public String getRuleName() {
+    return ruleName;
+  }
+
+
+  /**
+   * Validates this rule. If the rule condition is met, returns true; else
+   * returns false.
+   * @return boolean
+   */
+  public abstract boolean validate();
+
+  /**
+   * Actual processing logic for this rule.
+   * @param report
+   */
+  public abstract void process(T report);
+
+  /**
+   * Cleanup actions that need to be done once this rule is satisfied.
+   */
+  public abstract void cleanup();
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214950)
Time Spent: 3h 40m  (was: 3.5h)

> Refactor ChillMode rules and chillmode manager
> --
>
> Key: HDDS-1217
> URL: https://issues.apache.org/jira/browse/HDDS-1217
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> # Make ChillModeExitRule an abstract class and move the common logic for all 
> rules into it.
>  # Update tests for chill mode accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1217) Refactor ChillMode rules and chillmode manager

2019-03-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1217?focusedWorklogId=214970&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214970
 ]

ASF GitHub Bot logged work on HDDS-1217:


Author: ASF GitHub Bot
Created on: 18/Mar/19 19:02
Start Date: 18/Mar/19 19:02
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #558: 
HDDS-1217. Refactor ChillMode rules and chillmode manager.
URL: https://github.com/apache/hadoop/pull/558#discussion_r266596289
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ChillModeExitRule.java
 ##
 @@ -17,16 +17,87 @@
  */
 package org.apache.hadoop.hdds.scm.chillmode;
 
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+
 /**
- * Interface for defining chill mode exit rules.
+ * Abstract class for ChillModeExitRules. When a new rule is added, the new
+ * rule should extend this abstract class.
+ *
+ * Each rule should:
+ * 1. Add a handler for the event it is looking for during the
+ * initialization of the rule.
+ * 2. Add the rule to the list of rules in SCMChillModeManager.
+ *
  *
  * @param <T>
  */
-public interface ChillModeExitRule<T> {
+public abstract class ChillModeExitRule<T> implements EventHandler<T> {
+
+  private final SCMChillModeManager chillModeManager;
+  private final String ruleName;
+
+  public ChillModeExitRule(SCMChillModeManager chillModeManager,
+      String ruleName) {
+    this.chillModeManager = chillModeManager;
+    this.ruleName = ruleName;
+  }
+
+  /**
+   * Returns the name of this ChillModeExitRule.
+   * @return ruleName
+   */
+  public String getRuleName() {
+    return ruleName;
+  }
+
+
+  /**
+   * Validates this rule. If the rule condition is met, returns true; else
+   * returns false.
+   * @return boolean
+   */
+  public abstract boolean validate();
+
+  /**
+   * Actual processing logic for this rule.
+   * @param report
+   */
+  public abstract void process(T report);
+
+  /**
+   * Cleanup actions that need to be done once this rule is satisfied.
+   */
+  public abstract void cleanup();
+
+  @Override
+  public final void onMessage(T report, EventPublisher publisher) {
+
+    // TODO: when we have remove handlers, we can remove the getInChillMode check
+
+    if (chillModeManager.getInChillMode()) {
+      if (validate()) {
+        cleanup();
+        chillModeManager.validateChillModeExitRules(ruleName, publisher);
+        return;
+      }
+
+      process(report);
 
-  boolean validate();
+      if (validate()) {
+        cleanup();
+        chillModeManager.validateChillModeExitRules(ruleName, publisher);
+      }
+    }
+  }
 
-  void process(T report);
+  /**
+   * Returns the SCMChillModeManager.
+   * @return SCMChillModeManager
+   */
+  public SCMChillModeManager getChillModeManager() {
+    return chillModeManager;
+  }
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214970)
Time Spent: 3h 50m  (was: 3h 40m)

> Refactor ChillMode rules and chillmode manager
> --
>
> Key: HDDS-1217
> URL: https://issues.apache.org/jira/browse/HDDS-1217
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> # Make ChillModeExitRule an abstract class and move the common logic for all 
> rules into it.
>  # Update tests for chill mode accordingly.
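
To make the refactor concrete, here is a hypothetical rule extending the 
abstract class quoted in the review above (HealthyNodeCountRule and its 
NodeReport event type are invented for illustration):
{code:java}
public class HealthyNodeCountRule extends ChillModeExitRule<NodeReport> {

  private final int requiredNodes;
  private int healthyNodes = 0;

  public HealthyNodeCountRule(SCMChillModeManager manager, int requiredNodes) {
    super(manager, "HealthyNodeCountRule");
    this.requiredNodes = requiredNodes;
  }

  @Override
  public boolean validate() {
    // The rule is satisfied once enough healthy nodes have reported in.
    return healthyNodes >= requiredNodes;
  }

  @Override
  public void process(NodeReport report) {
    // onMessage() in the base class invokes this for every NodeReport event.
    healthyNodes++;
  }

  @Override
  public void cleanup() {
    // Nothing to release in this sketch.
  }
}
{code}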



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1217) Refactor ChillMode rules and chillmode manager

2019-03-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1217?focusedWorklogId=214947&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214947
 ]

ASF GitHub Bot logged work on HDDS-1217:


Author: ASF GitHub Bot
Created on: 18/Mar/19 18:54
Start Date: 18/Mar/19 18:54
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #558: 
HDDS-1217. Refactor ChillMode rules and chillmode manager.
URL: https://github.com/apache/hadoop/pull/558#discussion_r266593151
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ChillModeExitRule.java
 ##
 @@ -17,16 +17,87 @@
  */
 package org.apache.hadoop.hdds.scm.chillmode;
 
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+
 /**
- * Interface for defining chill mode exit rules.
+ * Abstract class for ChillModeExitRules. When a new rule is added, the new
+ * rule should extend this abstract class.
+ *
+ * Each rule should:
+ * 1. Add a handler for the event it is looking for during the
+ * initialization of the rule.
+ * 2. Add the rule to the list of rules in SCMChillModeManager.
+ *
  *
  * @param <T>
  */
-public interface ChillModeExitRule<T> {
+public abstract class ChillModeExitRule<T> implements EventHandler<T> {
+
+  private final SCMChillModeManager chillModeManager;
+  private final String ruleName;
+
+  public ChillModeExitRule(SCMChillModeManager chillModeManager,
+      String ruleName) {
+    this.chillModeManager = chillModeManager;
+    this.ruleName = ruleName;
+  }
+
+  /**
+   * Returns the name of this ChillModeExitRule.
+   * @return ruleName
+   */
+  public String getRuleName() {
+    return ruleName;
+  }
+
+
+  /**
+   * Validates this rule. If the rule condition is met, returns true; else
+   * returns false.
+   * @return boolean
+   */
+  public abstract boolean validate();
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214947)
Time Spent: 3h 20m  (was: 3h 10m)

> Refactor ChillMode rules and chillmode manager
> --
>
> Key: HDDS-1217
> URL: https://issues.apache.org/jira/browse/HDDS-1217
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> # Make ChillModeExitRule an abstract class and move the common logic for all 
> rules into it.
>  # Update tests for chill mode accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1305) Robot test containers hadoop client can't access o3fs

2019-03-18 Thread Sandeep Nemuri (JIRA)
Sandeep Nemuri created HDDS-1305:


 Summary: Robot test containers hadoop client can't access o3fs
 Key: HDDS-1305
 URL: https://issues.apache.org/jira/browse/HDDS-1305
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 0.5.0
Reporter: Sandeep Nemuri
 Attachments: run.log

Run the robot test using:
{code:java}
./test.sh --keep --env ozonefs
{code}

Log in to the OM container and check whether the desired volume/bucket/key got 
created by the robot tests.
{code:java}
[root@o3new ~]$ docker exec -it ozonefs_om_1 /bin/bash
bash-4.2$ ozone fs -ls o3fs://bucket1.fstest/
Found 3 items
-rw-rw-rw-   1 hadoop hadoop  22990 2019-03-15 17:28 
o3fs://bucket1.fstest/KEY.txt
drwxrwxrwx   - hadoop hadoop  0 1970-01-01 00:00 
o3fs://bucket1.fstest/testdir
drwxrwxrwx   - hadoop hadoop  0 2019-03-15 17:27 
o3fs://bucket1.fstest/testdir1
{code}
{code:java}
[root@o3new ~]$ docker exec -it ozonefs_hadoop3_1 /bin/bash
bash-4.4$ hadoop classpath
/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/*:/opt/hadoop/share/hadoop/common/*:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoop/share/hadoop/hdfs/*:/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/share/hadoop/yarn:/opt/hadoop/share/hadoop/yarn/lib/*:/opt/hadoop/share/hadoop/yarn/*:/opt/ozone/share/ozone/lib/hadoop-ozone-filesystem-lib-current-0.5.0-SNAPSHOT.jar
bash-4.4$ hadoop fs -ls o3fs://bucket1.fstest/
2019-03-18 19:12:42 INFO  Configuration:3204 - Removed undeclared tags:
2019-03-18 19:12:42 ERROR OzoneClientFactory:294 - Couldn't create protocol 
class org.apache.hadoop.ozone.client.rpc.RpcClient exception:
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
at 
org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.(OzoneClientAdapterImpl.java:127)
at 
org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:189)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:249)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:232)
at 
org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:104)
at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
Caused by: java.lang.VerifyError: Cannot inherit from final class
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.(OzoneManagerProtocolClientSideTranslatorPB.java:169)
at 
org.apache.hadoop.ozone.client.rpc.RpcClient.(RpcClient.java:142)
... 23 more
ls: Couldn't create protocol class org.apache.hadoop.ozone.client.rpc.RpcClient
2019-03-18 19:12:42 INFO  Configuration:3204 - Removed undeclared tags:
bash-4.4$
{code}

[jira] [Work logged] (HDDS-1217) Refactor ChillMode rules and chillmode manager

2019-03-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1217?focusedWorklogId=214977&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214977
 ]

ASF GitHub Bot logged work on HDDS-1217:


Author: ASF GitHub Bot
Created on: 18/Mar/19 19:25
Start Date: 18/Mar/19 19:25
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #558: HDDS-1217. 
Refactor ChillMode rules and chillmode manager.
URL: https://github.com/apache/hadoop/pull/558#issuecomment-474065388
 
 
   Thank You @nandakumar131  for the review.
   I have addressed the review comments.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214977)
Time Spent: 4h  (was: 3h 50m)

> Refactor ChillMode rules and chillmode manager
> --
>
> Key: HDDS-1217
> URL: https://issues.apache.org/jira/browse/HDDS-1217
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> # Make ChillModeExitRule an abstract class and move the common logic for all 
> rules into it.
>  # Update tests for chill mode accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1217) Refactor ChillMode rules and chillmode manager

2019-03-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1217?focusedWorklogId=214978&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214978
 ]

ASF GitHub Bot logged work on HDDS-1217:


Author: ASF GitHub Bot
Created on: 18/Mar/19 19:26
Start Date: 18/Mar/19 19:26
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #558: HDDS-1217. 
Refactor ChillMode rules and chillmode manager.
URL: https://github.com/apache/hadoop/pull/558#issuecomment-474065559
 
 
   > **ChillModeExitRule**
   > We can mandate the event-handler registration logic of each and every 
   > ChillModeExitRule implementation by introducing an abstract `abstract 
   > TypedEvent<T> getEventType()` method in the `ChillModeExitRule` class. 
   > Then we can make a call to this method from the `ChillModeExitRule` 
   > constructor. This will mandate each and every rule to provide its 
   > EventType, and we can add its event handler in `ChillModeExitRule`.
   > 
   > ```
   > public ChillModeExitRule(SCMChillModeManager chillModeManager,
   >     String ruleName, EventQueue eventQueue) {
   >   this.chillModeManager = chillModeManager;
   >   this.ruleName = ruleName;
   >   eventQueue.addHandler(getEventType(), this);
   > }
   > ```
   
   Done
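
Under that suggestion, a concrete rule would look roughly as follows (a sketch 
only; ContainerReportRule, the event constant, and the report type are invented 
here, not taken from the patch):
{code:java}
public class ContainerReportRule extends ChillModeExitRule<ContainerReport> {

  // Assumed event constant; real rules would return their own TypedEvent.
  private static final TypedEvent<ContainerReport> CONTAINER_REPORT =
      new TypedEvent<>(ContainerReport.class, "ContainerReport");

  private boolean seenReport = false;

  public ContainerReportRule(SCMChillModeManager manager, EventQueue queue) {
    // The base constructor calls eventQueue.addHandler(getEventType(), this),
    // so every rule registers its own handler automatically.
    super(manager, "ContainerReportRule", queue);
  }

  @Override
  protected TypedEvent<ContainerReport> getEventType() {
    return CONTAINER_REPORT;
  }

  @Override
  public boolean validate() {
    return seenReport;
  }

  @Override
  public void process(ContainerReport report) {
    seenReport = true; // trivially satisfied after one report, for the sketch
  }

  @Override
  public void cleanup() { }
}
{code}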
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214978)
Time Spent: 4h 10m  (was: 4h)

> Refactor ChillMode rules and chillmode manager
> --
>
> Key: HDDS-1217
> URL: https://issues.apache.org/jira/browse/HDDS-1217
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> # Make ChillModeExitRule an abstract class and move the common logic for all 
> rules into it.
>  # Update tests for chill mode accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-875) Use apache hadoop docker image for the ozonefs cluster definition

2019-03-18 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795332#comment-16795332
 ] 

Sandeep Nemuri commented on HDDS-875:
-

Looks like the ozonefs+hdfs integration is broken with the current setup. Created 
HDDS-1305.
Once it is fixed, I will update the image and add robot tests.

> Use apache hadoop docker image for the ozonefs cluster definition
> -
>
> Key: HDDS-875
> URL: https://issues.apache.org/jira/browse/HDDS-875
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>
> In HDDS-223 we switched from the external flokkr/hadoop image to use the 
> apache/hadoop images for the acceptance test of ozone.
> As [~msingh] pointed out to me, the compose/ozonefs folder still uses the 
> flokkr/hadoop image.
> It should be easy to switch to the latest apache hadoop image.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


