[jira] [Commented] (HDDS-1753) Datanode unable to find chunk while replication data using ratis.

2019-08-16 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908895#comment-16908895
 ] 

Shashikant Banerjee commented on HDDS-1753:
---

Uploaded patch v0 to address the issue. The patch is rebased on top of 
HDDS-1610.

> Datanode unable to find chunk while replication data using ratis.
> -
>
> Key: HDDS-1753
> URL: https://issues.apache.org/jira/browse/HDDS-1753
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: MiniOzoneChaosCluster
> Attachments: HDDS-1753.000.patch
>
>
> The leader datanode is unable to read a chunk from the datanode while 
> replicating data from the leader to the follower.
> Please note that deletion of keys is also happening while the data is being 
> replicated.
> {code}
> 2019-07-02 19:39:22,604 INFO  impl.RaftServerImpl 
> (RaftServerImpl.java:checkInconsistentAppendEntries(972)) - 
> 5ac88709-a3a2-4c8f-91de-5e54b617f05e: inconsistency entries. 
> Reply:76a3eb0f-d7cd-477b-8973-db1
> 014feb398<-5ac88709-a3a2-4c8f-91de-5e54b617f05e#70:FAIL,INCONSISTENCY,nextIndex:9771,term:2,followerCommit:9782
> 2019-07-02 19:39:22,605 ERROR impl.ChunkManagerImpl 
> (ChunkUtils.java:readData(161)) - Unable to find the chunk file. chunk info : 
> ChunkInfo{chunkName='76ec669ae2cb6e10dd9f08c0789c5fdf_stream_a2850dce-def3
> -4d64-93d8-fa2ebafee933_chunk_1, offset=0, len=2048}
> 2019-07-02 19:39:22,605 INFO  impl.RaftServerImpl 
> (RaftServerImpl.java:checkInconsistentAppendEntries(990)) - 
> 5ac88709-a3a2-4c8f-91de-5e54b617f05e: Failed appendEntries as latest snapshot 
> (9770) already h
> as the append entries (first index: 1)
> 2019-07-02 19:39:22,605 INFO  impl.RaftServerImpl 
> (RaftServerImpl.java:checkInconsistentAppendEntries(972)) - 
> 5ac88709-a3a2-4c8f-91de-5e54b617f05e: inconsistency entries. 
> Reply:76a3eb0f-d7cd-477b-8973-db1
> 014feb398<-5ac88709-a3a2-4c8f-91de-5e54b617f05e#71:FAIL,INCONSISTENCY,nextIndex:9771,term:2,followerCommit:9782
> 2019-07-02 19:39:22,605 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: ReadChunk : Trace 
> ID: 4216d461a4679e17:4216d461a4679e17:0:0 : Message: Unable to find the c
> hunk file. chunk info 
> ChunkInfo{chunkName='76ec669ae2cb6e10dd9f08c0789c5fdf_stream_a2850dce-def3-4d64-93d8-fa2ebafee933_chunk_1,
>  offset=0, len=2048} : Result: UNABLE_TO_FIND_CHUNK
> 2019-07-02 19:39:22,605 INFO  impl.RaftServerImpl 
> (RaftServerImpl.java:checkInconsistentAppendEntries(990)) - 
> 5ac88709-a3a2-4c8f-91de-5e54b617f05e: Failed appendEntries as latest snapshot 
> (9770) already h
> as the append entries (first index: 2)
> 2019-07-02 19:39:22,606 INFO  impl.RaftServerImpl 
> (RaftServerImpl.java:checkInconsistentAppendEntries(972)) - 
> 5ac88709-a3a2-4c8f-91de-5e54b617f05e: inconsistency entries. 
> Reply:76a3eb0f-d7cd-477b-8973-db1
> 014feb398<-5ac88709-a3a2-4c8f-91de-5e54b617f05e#72:FAIL,INCONSISTENCY,nextIndex:9771,term:2,followerCommit:9782
> 19:39:22.606 [pool-195-thread-19] ERROR DNAudit - user=null | ip=null | 
> op=READ_CHUNK {blockData=conID: 3 locID: 102372189549953034 bcsId: 0} | 
> ret=FAILURE
> java.lang.Exception: Unable to find the chunk file. chunk info 
> ChunkInfo{chunkName='76ec669ae2cb6e10dd9f08c0789c5fdf_stream_a2850dce-def3-4d64-93d8-fa2ebafee933_chunk_1,
>  offset=0, len=2048}
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(HddsDispatcher.java:320)
>  ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
>  ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:346)
>  ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.readStateMachineData(ContainerStateMachine.java:476)
>  ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$getCachedStateMachineData$2(ContainerStateMachine.java:495)
>  ~[hadoop-hdds-container-service-0.5.0-SN
> APSHOT.jar:?]
> at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4767)
>  ~[guava-11.0.2.jar:?]
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
>  ~[guava-11.0.2.jar:?]
> at 
> com.google.common.cache.LocalCache$Segment.loadSy
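The stack trace shows readStateMachineData loading chunk data through a Guava 
cache (getCachedStateMachineData). A minimal sketch of that pattern, assuming a 
Raft log index as the key and a hypothetical readChunkFromDisk helper standing 
in for the dispatcher call; this is not the actual Ozone code:
{code:java}
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

class ChunkReadCacheSketch {
  // cache of raw chunk bytes keyed by Raft log index (key/value types are assumptions)
  private final Cache<Long, byte[]> stateMachineDataCache =
      CacheBuilder.newBuilder().maximumSize(1024).build();

  byte[] getCachedStateMachineData(long logIndex) throws Exception {
    // the loader runs only on a cache miss; if the chunk file was deleted
    // concurrently (e.g. by key deletion), its exception surfaces as the
    // "Unable to find the chunk file" error seen in the log above
    return stateMachineDataCache.get(logIndex, () -> readChunkFromDisk(logIndex));
  }

  // hypothetical stand-in for the HddsDispatcher ReadChunk call
  private byte[] readChunkFromDisk(long logIndex) {
    throw new UnsupportedOperationException("dispatcher call elided");
  }
}
{code}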

[jira] [Updated] (HDDS-1753) Datanode unable to find chunk while replication data using ratis.

2019-08-16 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-1753:
--
Attachment: HDDS-1753.000.patch

> Datanode unable to find chunk while replication data using ratis.
> -
>
> Key: HDDS-1753
> URL: https://issues.apache.org/jira/browse/HDDS-1753
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: MiniOzoneChaosCluster
> Attachments: HDDS-1753.000.patch
>
>
> The leader datanode is unable to read a chunk from the datanode while 
> replicating data from the leader to the follower.
> Please note that deletion of keys is also happening while the data is being 
> replicated.
> {code}
> 2019-07-02 19:39:22,604 INFO  impl.RaftServerImpl 
> (RaftServerImpl.java:checkInconsistentAppendEntries(972)) - 
> 5ac88709-a3a2-4c8f-91de-5e54b617f05e: inconsistency entries. 
> Reply:76a3eb0f-d7cd-477b-8973-db1
> 014feb398<-5ac88709-a3a2-4c8f-91de-5e54b617f05e#70:FAIL,INCONSISTENCY,nextIndex:9771,term:2,followerCommit:9782
> 2019-07-02 19:39:22,605 ERROR impl.ChunkManagerImpl 
> (ChunkUtils.java:readData(161)) - Unable to find the chunk file. chunk info : 
> ChunkInfo{chunkName='76ec669ae2cb6e10dd9f08c0789c5fdf_stream_a2850dce-def3
> -4d64-93d8-fa2ebafee933_chunk_1, offset=0, len=2048}
> 2019-07-02 19:39:22,605 INFO  impl.RaftServerImpl 
> (RaftServerImpl.java:checkInconsistentAppendEntries(990)) - 
> 5ac88709-a3a2-4c8f-91de-5e54b617f05e: Failed appendEntries as latest snapshot 
> (9770) already h
> as the append entries (first index: 1)
> 2019-07-02 19:39:22,605 INFO  impl.RaftServerImpl 
> (RaftServerImpl.java:checkInconsistentAppendEntries(972)) - 
> 5ac88709-a3a2-4c8f-91de-5e54b617f05e: inconsistency entries. 
> Reply:76a3eb0f-d7cd-477b-8973-db1
> 014feb398<-5ac88709-a3a2-4c8f-91de-5e54b617f05e#71:FAIL,INCONSISTENCY,nextIndex:9771,term:2,followerCommit:9782
> 2019-07-02 19:39:22,605 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: ReadChunk : Trace 
> ID: 4216d461a4679e17:4216d461a4679e17:0:0 : Message: Unable to find the c
> hunk file. chunk info 
> ChunkInfo{chunkName='76ec669ae2cb6e10dd9f08c0789c5fdf_stream_a2850dce-def3-4d64-93d8-fa2ebafee933_chunk_1,
>  offset=0, len=2048} : Result: UNABLE_TO_FIND_CHUNK
> 2019-07-02 19:39:22,605 INFO  impl.RaftServerImpl 
> (RaftServerImpl.java:checkInconsistentAppendEntries(990)) - 
> 5ac88709-a3a2-4c8f-91de-5e54b617f05e: Failed appendEntries as latest snapshot 
> (9770) already h
> as the append entries (first index: 2)
> 2019-07-02 19:39:22,606 INFO  impl.RaftServerImpl 
> (RaftServerImpl.java:checkInconsistentAppendEntries(972)) - 
> 5ac88709-a3a2-4c8f-91de-5e54b617f05e: inconsistency entries. 
> Reply:76a3eb0f-d7cd-477b-8973-db1
> 014feb398<-5ac88709-a3a2-4c8f-91de-5e54b617f05e#72:FAIL,INCONSISTENCY,nextIndex:9771,term:2,followerCommit:9782
> 19:39:22.606 [pool-195-thread-19] ERROR DNAudit - user=null | ip=null | 
> op=READ_CHUNK {blockData=conID: 3 locID: 102372189549953034 bcsId: 0} | 
> ret=FAILURE
> java.lang.Exception: Unable to find the chunk file. chunk info 
> ChunkInfo{chunkName='76ec669ae2cb6e10dd9f08c0789c5fdf_stream_a2850dce-def3-4d64-93d8-fa2ebafee933_chunk_1,
>  offset=0, len=2048}
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(HddsDispatcher.java:320)
>  ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
>  ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:346)
>  ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.readStateMachineData(ContainerStateMachine.java:476)
>  ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$getCachedStateMachineData$2(ContainerStateMachine.java:495)
>  ~[hadoop-hdds-container-service-0.5.0-SN
> APSHOT.jar:?]
> at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4767)
>  ~[guava-11.0.2.jar:?]
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
>  ~[guava-11.0.2.jar:?]
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350) 
> ~[guava-11.0.2.jar:?]
> at 
> com.google.common.cache.LocalCache$Segm

[jira] [Commented] (HDFS-14739) RBF: LS command for mount point shows wrong owner and permission information.

2019-08-16 Thread xuzq (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908892#comment-16908892
 ] 

xuzq commented on HDFS-14739:
-

Thanks [~hemanthboyina].

In the directory tree structure, /mnt/test1 should be a child of /mnt, so I 
think they are related.

And the mount point /test1 with owner (test1) is unrelated to the mount 
point /mnt with owner (mnt), but its owner is returned when listing the files 
in /mnt.

> RBF: LS command for mount point shows wrong owner and permission information.
> -
>
> Key: HDFS-14739
> URL: https://issues.apache.org/jira/browse/HDFS-14739
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Priority: Major
> Attachments: image-2019-08-16-17-15-50-614.png, 
> image-2019-08-16-17-16-00-863.png, image-2019-08-16-17-16-34-325.png
>
>
> ||source||target namespace||destination||owner||group||permission||
> |/mnt|ns0|/mnt|mnt|mnt_group|755|
> |/mnt/test1|ns1|/mnt/test1|mnt_test1|mnt_test1_group|755|
> |/test1|ns1|/test1|test1|test1_group|755|
> When doing getListing("/mnt"), the owner of */mnt/test1* should be *mnt_test1* 
> instead of *test1* in the result.
>  
> And if the mount table is as below, we should support getListing("/mnt") instead 
> of throwing an IOException when dfs.federation.router.default.nameservice.enable is 
> false.
> ||source||target namespace||destination||owner||group||permission||
> |/mnt/test1|ns0|/mnt/test1|test1|test1|755|
> |/mnt/test2|ns1|/mnt/test2|test2|test2|755|
>  
>  
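An illustrative sketch of the expected owner resolution described above (not 
the actual Router code): for each child returned by getListing, the owner 
should come from the most specific matching mount entry, so /mnt/test1 wins 
over /test1. All names below are assumptions for illustration only:
{code:java}
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

class MountOwnerSketch {
  static class MountEntry {
    final String source;  // e.g. "/mnt/test1"
    final String owner;   // e.g. "mnt_test1"
    MountEntry(String source, String owner) { this.source = source; this.owner = owner; }
  }

  // pick the mount entry whose source is the longest prefix of the child path
  static Optional<String> ownerFor(String childPath, List<MountEntry> mountTable) {
    return mountTable.stream()
        .filter(e -> childPath.equals(e.source) || childPath.startsWith(e.source + "/"))
        .max(Comparator.comparingInt((MountEntry e) -> e.source.length()))
        .map(e -> e.owner);
  }
}
{code}
With the mount table above, ownerFor("/mnt/test1", table) would yield mnt_test1 
rather than test1.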






[jira] [Commented] (HDFS-14739) RBF: LS command for mount point shows wrong owner and permission information.

2019-08-16 Thread hemanthboyina (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908883#comment-16908883
 ] 

hemanthboyina commented on HDFS-14739:
--

As far as I know, the two are unrelated: the mount point you have created 
(/mnt/test1) with owner (_mnt_test1_), and the ls of files in /mnt.


[~elgoiri], is this issue a valid one?

> RBF: LS command for mount point shows wrong owner and permission information.
> -
>
> Key: HDFS-14739
> URL: https://issues.apache.org/jira/browse/HDFS-14739
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Priority: Major
> Attachments: image-2019-08-16-17-15-50-614.png, 
> image-2019-08-16-17-16-00-863.png, image-2019-08-16-17-16-34-325.png
>
>
> ||source||target namespace||destination||owner||group||permission||
> |/mnt|ns0|/mnt|mnt|mnt_group|755|
> |/mnt/test1|ns1|/mnt/test1|mnt_test1|mnt_test1_group|755|
> |/test1|ns1|/test1|test1|test1_group|755|
> When doing getListing("/mnt"), the owner of */mnt/test1* should be *mnt_test1* 
> instead of *test1* in the result.
>  
> And if the mount table is as below, we should support getListing("/mnt") instead 
> of throwing an IOException when dfs.federation.router.default.nameservice.enable is 
> false.
> ||source||target namespace||destination||owner||group||permission||
> |/mnt/test1|ns0|/mnt/test1|test1|test1|755|
> |/mnt/test2|ns1|/mnt/test2|test2|test2|755|
>  
>  






[jira] [Comment Edited] (HDFS-14739) RBF: LS command for mount point shows wrong owner and permission information.

2019-08-16 Thread xuzq (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908847#comment-16908847
 ] 

xuzq edited comment on HDFS-14739 at 8/16/19 9:16 AM:
--

I'm sorry, what we are saying may be different things. :(

First, during mount point creation, we get the owner from:
{code:java}
UserGroupInformation ugi = NameNode.getRemoteUser();
record.setOwnerName(ugi.getShortUserName());
{code}
Then it will be overwritten:
{code:java}
// Set ACL info for mount table entry
if (aclInfo.getOwner() != null) {
  newEntry.setOwnerName(aclInfo.getOwner());
}

if (aclInfo.getGroup() != null) {
  newEntry.setGroupName(aclInfo.getGroup());
}

if (aclInfo.getMode() != null) {
  newEntry.setMode(aclInfo.getMode());
}
{code}
Second, my mount table is like below:

!image-2019-08-16-17-16-00-863.png|width=563,height=50!

!image-2019-08-16-17-16-34-325.png|width=552,height=56!

The FsShell LS result does not match my expectation; my expected result should be like:

drwxr-xr-x   - *mnt_test1  mnt_test1_group*  0  2019-08-15 19:07 /mnt/test1

 

What is your expected result, and why? [~hemanthboyina] :)

 


was (Author: xuzq_zander):
I'm sorry, what we are saying may be different things. :(

First, during mount point creation, we get the owner from:
{code:java}
UserGroupInformation ugi = NameNode.getRemoteUser();
record.setOwnerName(ugi.getShortUserName());
{code}
Then it will be overwritten:
{code:java}
// Set ACL info for mount table entry
if (aclInfo.getOwner() != null) {
  newEntry.setOwnerName(aclInfo.getOwner());
}

if (aclInfo.getGroup() != null) {
  newEntry.setGroupName(aclInfo.getGroup());
}

if (aclInfo.getMode() != null) {
  newEntry.setMode(aclInfo.getMode());
}
{code}
Second, my mount table is like below:

!image-2019-08-16-16-27-08-003.png|width=847,height=63!

!image-2019-08-16-16-28-15-022.png|width=498,height=47!

The LS result does not match my expectation; my expected result should be like:

drwxr-xr-x   - *mnt_test1  mnt_test1_group*  0  2019-08-15 19:07 /mnt/test1

 

What is your expected result, and why? [~hemanthboyina] :)

 

> RBF: LS command for mount point shows wrong owner and permission information.
> -
>
> Key: HDFS-14739
> URL: https://issues.apache.org/jira/browse/HDFS-14739
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Priority: Major
> Attachments: image-2019-08-16-17-15-50-614.png, 
> image-2019-08-16-17-16-00-863.png, image-2019-08-16-17-16-34-325.png
>
>
> ||source||target namespace||destination||owner||group||permission||
> |/mnt|ns0|/mnt|mnt|mnt_group|755|
> |/mnt/test1|ns1|/mnt/test1|mnt_test1|mnt_test1_group|755|
> |/test1|ns1|/test1|test1|test1_group|755|
> When doing getListing("/mnt"), the owner of */mnt/test1* should be *mnt_test1* 
> instead of *test1* in the result.
>  
> And if the mount table is as below, we should support getListing("/mnt") instead 
> of throwing an IOException when dfs.federation.router.default.nameservice.enable is 
> false.
> ||source||target namespace||destination||owner||group||permission||
> |/mnt/test1|ns0|/mnt/test1|test1|test1|755|
> |/mnt/test2|ns1|/mnt/test2|test2|test2|755|
>  
>  






[jira] [Commented] (HDFS-14739) RBF: LS command for mount point shows wrong owner and permission information.

2019-08-16 Thread xuzq (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908876#comment-16908876
 ] 

xuzq commented on HDFS-14739:
-

I'm sorry, maybe I didn't express it clearly.

First, I added some mount table entries using commands like:

_./bin/hdfs dfsrouteradmin -add /mnt ns0 /mnt -owner mnt -group mnt_group -mode 
755_

 _./bin/hdfs dfsrouteradmin -add /mnt/test1 ns0 /mnt/test1 -owner mnt_test1 
-group mnt_test1_group -mode 755_

 _./bin/hdfs dfsrouteradmin -add /test1 ns1 /test1 -owner test1 -group test1 
-mode 755_

 

Second, I used the hadoop fs shell to do an ls (in order to issue the getListing RPC), like:

_./bin/hadoop fs -ls /mnt_

 

 

And the result of the FsShell ls does not match my expectation.

 

> RBF: LS command for mount point shows wrong owner and permission information.
> -
>
> Key: HDFS-14739
> URL: https://issues.apache.org/jira/browse/HDFS-14739
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Priority: Major
>
> ||source||target namespace||destination||owner||group||permission||
> |/mnt|ns0|/mnt|mnt|mnt_group|755|
> |/mnt/test1|ns1|/mnt/test1|mnt_test1|mnt_test1_group|755|
> |/test1|ns1|/test1|test1|test1_group|755|
> When doing getListing("/mnt"), the owner of */mnt/test1* should be *mnt_test1* 
> instead of *test1* in the result.
>  
> And if the mount table is as below, we should support getListing("/mnt") instead 
> of throwing an IOException when dfs.federation.router.default.nameservice.enable is 
> false.
> ||source||target namespace||destination||owner||group||permission||
> |/mnt/test1|ns0|/mnt/test1|test1|test1|755|
> |/mnt/test2|ns1|/mnt/test2|test2|test2|755|
>  
>  






[jira] [Updated] (HDFS-14739) RBF: LS command for mount point shows wrong owner and permission information.

2019-08-16 Thread xuzq (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuzq updated HDFS-14739:

Attachment: (was: image-2019-08-16-16-28-15-022.png)

> RBF: LS command for mount point shows wrong owner and permission information.
> -
>
> Key: HDFS-14739
> URL: https://issues.apache.org/jira/browse/HDFS-14739
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Priority: Major
>
> ||source||target namespace||destination||owner||group||permission||
> |/mnt|ns0|/mnt|mnt|mnt_group|755|
> |/mnt/test1|ns1|/mnt/test1|mnt_test1|mnt_test1_group|755|
> |/test1|ns1|/test1|test1|test1_group|755|
> When doing getListing("/mnt"), the owner of */mnt/test1* should be *mnt_test1* 
> instead of *test1* in the result.
>  
> And if the mount table is as below, we should support getListing("/mnt") instead 
> of throwing an IOException when dfs.federation.router.default.nameservice.enable is 
> false.
> ||source||target namespace||destination||owner||group||permission||
> |/mnt/test1|ns0|/mnt/test1|test1|test1|755|
> |/mnt/test2|ns1|/mnt/test2|test2|test2|755|
>  
>  






[jira] [Updated] (HDFS-14739) RBF: LS command for mount point shows wrong owner and permission information.

2019-08-16 Thread xuzq (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuzq updated HDFS-14739:

Attachment: (was: image-2019-08-16-16-27-08-003.png)

> RBF: LS command for mount point shows wrong owner and permission information.
> -
>
> Key: HDFS-14739
> URL: https://issues.apache.org/jira/browse/HDFS-14739
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Priority: Major
>
> ||source||target namespace||destination||owner||group||permission||
> |/mnt|ns0|/mnt|mnt|mnt_group|755|
> |/mnt/test1|ns1|/mnt/test1|mnt_test1|mnt_test1_group|755|
> |/test1|ns1|/test1|test1|test1_group|755|
> When doing getListing("/mnt"), the owner of */mnt/test1* should be *mnt_test1* 
> instead of *test1* in the result.
>  
> And if the mount table is as below, we should support getListing("/mnt") instead 
> of throwing an IOException when dfs.federation.router.default.nameservice.enable is 
> false.
> ||source||target namespace||destination||owner||group||permission||
> |/mnt/test1|ns0|/mnt/test1|test1|test1|755|
> |/mnt/test2|ns1|/mnt/test2|test2|test2|755|
>  
>  






[jira] [Comment Edited] (HDFS-14452) Make Op#valueOf() Public

2019-08-16 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908832#comment-16908832
 ] 

Surendra Singh Lilhore edited comment on HDFS-14452 at 8/16/19 8:52 AM:


Hi [~belugabehr],

The InterfaceAudience for {{Op}} is {{Private}}, which means it is used only 
within Hadoop. Can you tell me your scenario?
{code:java}
/** Operation */
@InterfaceAudience.Private
@InterfaceStability.Evolving
public enum Op {
  WRITE_BLOCK((byte)80),{code}


was (Author: surendrasingh):
LGTM, +1

> Make Op#valueOf() Public
> 
>
> Key: HDFS-14452
> URL: https://issues.apache.org/jira/browse/HDFS-14452
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: hemanthboyina
>Priority: Minor
>  Labels: noob
> Attachments: HDFS-14452.patch
>
>
> Change the signature of {{private static Op valueOf(byte code)}} to be public.  
> Right now, the only easy way to look up an Op is to pass in a {{DataInput}} 
> object, which is not all that flexible or efficient for other custom 
> implementations that want to store the Op code in a different way.
> https://github.com/apache/hadoop/blob/8c95cb9d6bef369fef6a8364f0c0764eba90e44a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Op.java#L53
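A sketch of the motivation, assuming {{Op.read(DataInput)}} is the accessible 
lookup today, as the description implies; the commented-out direct lookup is 
what this issue proposes to enable:
{code:java}
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.datatransfer.Op;

class OpLookupSketch {
  // today: wrap a single byte in a DataInput stream just to resolve the Op
  static Op lookupViaDataInput(byte code) throws IOException {
    return Op.read(new DataInputStream(new ByteArrayInputStream(new byte[] {code})));
  }

  // with a public valueOf(byte): a direct lookup, no stream allocation
  // static Op lookupDirect(byte code) { return Op.valueOf(code); }
}
{code}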






[jira] [Work logged] (HDDS-1972) Provide example ha proxy with multiple s3 servers back end.

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1972?focusedWorklogId=296158&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296158
 ]

ASF GitHub Bot logged work on HDDS-1972:


Author: ASF GitHub Bot
Created on: 16/Aug/19 08:50
Start Date: 16/Aug/19 08:50
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1304: HDDS-1972. 
Provide example ha proxy with multiple s3 servers back end.
URL: https://github.com/apache/hadoop/pull/1304#discussion_r314611584
 
 

 ##
 File path: 
hadoop-ozone/dist/src/main/compose/ozones3-haproxy/docker-compose.yaml
 ##
 @@ -0,0 +1,83 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+version: "3"
+services:
+   s3-proxy:
 
 Review comment:
   ```suggestion
  s3g:
   ```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296158)
Time Spent: 1h  (was: 50m)

> Provide example ha proxy with multiple s3 servers back end.
> ---
>
> Key: HDDS-1972
> URL: https://issues.apache.org/jira/browse/HDDS-1972
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> In this Jira, we shall provide docker-compose files where we start 3 S3 
> gateway servers, and HAProxy is used to load balance these S3 Gateway 
> Servers.
>  
> For now, all proxy configurations are hardcoded; as a future improvement, we 
> can make this scale and configure itself automatically with environment 
> variables. This is just a starter example.
>  






[jira] [Work logged] (HDDS-1972) Provide example ha proxy with multiple s3 servers back end.

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1972?focusedWorklogId=296157&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296157
 ]

ASF GitHub Bot logged work on HDDS-1972:


Author: ASF GitHub Bot
Created on: 16/Aug/19 08:50
Start Date: 16/Aug/19 08:50
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1304: HDDS-1972. 
Provide example ha proxy with multiple s3 servers back end.
URL: https://github.com/apache/hadoop/pull/1304#discussion_r314612064
 
 

 ##
 File path: 
hadoop-ozone/dist/src/main/compose/ozones3-haproxy/haproxy-conf/haproxy.cfg
 ##
 @@ -0,0 +1,22 @@
+# Simple configuration for an HTTP proxy listening on port 5001 on all
 
 Review comment:
   ```suggestion
   # Simple configuration for an HTTP proxy listening on port 9878 on all
   ```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296157)
Time Spent: 50m  (was: 40m)

> Provide example ha proxy with multiple s3 servers back end.
> ---
>
> Key: HDDS-1972
> URL: https://issues.apache.org/jira/browse/HDDS-1972
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In this Jira, we shall provide docker-compose files where we start 3 S3 
> gateway servers, and HAProxy is used to load balance these S3 Gateway 
> Servers.
>  
> For now, all proxy configurations are hardcoded; as a future improvement, we 
> can make this scale and configure itself automatically with environment 
> variables. This is just a starter example.
>  






[jira] [Work logged] (HDDS-1972) Provide example ha proxy with multiple s3 servers back end.

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1972?focusedWorklogId=296156&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296156
 ]

ASF GitHub Bot logged work on HDDS-1972:


Author: ASF GitHub Bot
Created on: 16/Aug/19 08:50
Start Date: 16/Aug/19 08:50
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1304: HDDS-1972. 
Provide example ha proxy with multiple s3 servers back end.
URL: https://github.com/apache/hadoop/pull/1304#discussion_r314611833
 
 

 ##
 File path: 
hadoop-ozone/dist/src/main/compose/ozones3-haproxy/docker-compose.yaml
 ##
 @@ -0,0 +1,83 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+version: "3"
+services:
+   s3-proxy:
+  image: haproxy:latest
+  volumes:
+ - ../..:/opt/hadoop
+ - ./haproxy-conf/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
+  ports:
+ - 8081:5001
 
 Review comment:
   ```suggestion
- 8081:9878
   ```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296156)
Time Spent: 40m  (was: 0.5h)

> Provide example ha proxy with multiple s3 servers back end.
> ---
>
> Key: HDDS-1972
> URL: https://issues.apache.org/jira/browse/HDDS-1972
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In this Jira, we shall provide docker-compose files where we start 3 S3 
> gateway servers, and HAProxy is used to load balance these S3 Gateway 
> Servers.
>  
> For now, all proxy configurations are hardcoded; as a future improvement, we 
> can make this scale and configure itself automatically with environment 
> variables. This is just a starter example.
>  






[jira] [Work logged] (HDDS-1972) Provide example ha proxy with multiple s3 servers back end.

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1972?focusedWorklogId=296155&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296155
 ]

ASF GitHub Bot logged work on HDDS-1972:


Author: ASF GitHub Bot
Created on: 16/Aug/19 08:50
Start Date: 16/Aug/19 08:50
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1304: HDDS-1972. 
Provide example ha proxy with multiple s3 servers back end.
URL: https://github.com/apache/hadoop/pull/1304#discussion_r314612036
 
 

 ##
 File path: 
hadoop-ozone/dist/src/main/compose/ozones3-haproxy/haproxy-conf/haproxy.cfg
 ##
 @@ -0,0 +1,22 @@
+# Simple configuration for an HTTP proxy listening on port 5001 on all
+# interfaces and forwarding requests to a multiple multiple S3 servers in round
+# robin fashion.
+global
+daemon
+maxconn 256
+
+defaults
+mode http
+timeout connect 5000ms
+timeout client 5ms
+timeout server 5ms
+
+frontend http-in
+bind *:5001
 
 Review comment:
   ```suggestion
   bind *:9878
   ```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296155)
Time Spent: 0.5h  (was: 20m)

> Provide example ha proxy with multiple s3 servers back end.
> ---
>
> Key: HDDS-1972
> URL: https://issues.apache.org/jira/browse/HDDS-1972
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In this Jira, we shall provide docker-compose files where we start 3 S3 
> gateway servers, and HAProxy is used to load balance these S3 Gateway 
> Servers.
>  
> For now, all proxy configurations are hardcoded; as a future improvement, we 
> can make this scale and configure itself automatically with environment 
> variables. This is just a starter example.
>  






[jira] [Commented] (HDFS-14739) RBF: LS command for mount point shows wrong owner and permission information.

2019-08-16 Thread hemanthboyina (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908854#comment-16908854
 ] 

hemanthboyina commented on HDFS-14739:
--

    _then it will be overwritten_

It will be overwritten only if you explicitly specify the group during mount 
point creation, so that's not our scenario.

For listing mount points, the command is dfsrouteradmin -ls; what you have 
been checking is the listing of files in /mnt.
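For reference, the two commands being contrasted here, in the style used 
earlier in this thread:

_./bin/hdfs dfsrouteradmin -ls /mnt_  (lists the mount table entries under /mnt)

_./bin/hadoop fs -ls /mnt_  (lists the files under /mnt through the Router, 
i.e. the getListing RPC)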

> RBF: LS command for mount point shows wrong owner and permission information.
> -
>
> Key: HDFS-14739
> URL: https://issues.apache.org/jira/browse/HDFS-14739
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Priority: Major
> Attachments: image-2019-08-16-16-27-08-003.png, 
> image-2019-08-16-16-28-15-022.png
>
>
> ||source||target namespace||destination||owner||group||permission||
> |/mnt|ns0|/mnt|mnt|mnt_group|755|
> |/mnt/test1|ns1|/mnt/test1|mnt_test1|mnt_test1_group|755|
> |/test1|ns1|/test1|test1|test1_group|755|
> When doing getListing("/mnt"), the owner of */mnt/test1* should be *mnt_test1* 
> instead of *test1* in the result.
>  
> And if the mount table is as below, we should support getListing("/mnt") instead 
> of throwing an IOException when dfs.federation.router.default.nameservice.enable is 
> false.
> ||source||target namespace||destination||owner||group||permission||
> |/mnt/test1|ns0|/mnt/test1|test1|test1|755|
> |/mnt/test2|ns1|/mnt/test2|test2|test2|755|
>  
>  






[jira] [Commented] (HDFS-14739) RBF: LS command for mount point shows wrong owner and permission information.

2019-08-16 Thread xuzq (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908847#comment-16908847
 ] 

xuzq commented on HDFS-14739:
-

I'm sorry, what we are saying may be different things. :(

First, during mount point creation, we get the owner from:
{code:java}
UserGroupInformation ugi = NameNode.getRemoteUser();
record.setOwnerName(ugi.getShortUserName());
{code}
Then it will be overwritten:
{code:java}
// Set ACL info for mount table entry
if (aclInfo.getOwner() != null) {
  newEntry.setOwnerName(aclInfo.getOwner());
}

if (aclInfo.getGroup() != null) {
  newEntry.setGroupName(aclInfo.getGroup());
}

if (aclInfo.getMode() != null) {
  newEntry.setMode(aclInfo.getMode());
}
{code}
Second, my mount table is like below:

!image-2019-08-16-16-27-08-003.png|width=847,height=63!

!image-2019-08-16-16-28-15-022.png|width=498,height=47!

The LS result does not match my expectation; my expected result should be like:

drwxr-xr-x   - *mnt_test1  mnt_test1_group*  0  2019-08-15 19:07 /mnt/test1

 

What is your expected result, and why? [~hemanthboyina] :)

 

> RBF: LS command for mount point shows wrong owner and permission information.
> -
>
> Key: HDFS-14739
> URL: https://issues.apache.org/jira/browse/HDFS-14739
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Priority: Major
> Attachments: image-2019-08-16-16-27-08-003.png, 
> image-2019-08-16-16-28-15-022.png
>
>
> ||source||target namespace||destination||owner||group||permission||
> |/mnt|ns0|/mnt|mnt|mnt_group|755|
> |/mnt/test1|ns1|/mnt/test1|mnt_test1|mnt_test1_group|755|
> |/test1|ns1|/test1|test1|test1_group|755|
> When doing getListing("/mnt"), the owner of */mnt/test1* should be *mnt_test1* 
> instead of *test1* in the result.
>  
> And if the mount table is as below, we should support getListing("/mnt") instead 
> of throwing an IOException when dfs.federation.router.default.nameservice.enable is 
> false.
> ||source||target namespace||destination||owner||group||permission||
> |/mnt/test1|ns0|/mnt/test1|test1|test1|755|
> |/mnt/test2|ns1|/mnt/test2|test2|test2|755|
>  
>  






[jira] [Updated] (HDFS-14452) Make Op#valueOf() Public

2019-08-16 Thread Surendra Singh Lilhore (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-14452:
--
Summary: Make Op#valueOf() Public  (was: Make Op valueOf Public)

> Make Op#valueOf() Public
> 
>
> Key: HDFS-14452
> URL: https://issues.apache.org/jira/browse/HDFS-14452
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: hemanthboyina
>Priority: Minor
>  Labels: noob
> Attachments: HDFS-14452.patch
>
>
> Change the signature of {{private static Op valueOf(byte code)}} to be public.  
> Right now, the only easy way to look up an Op is to pass in a {{DataInput}} 
> object, which is not all that flexible or efficient for other custom 
> implementations that want to store the Op code in a different way.
> https://github.com/apache/hadoop/blob/8c95cb9d6bef369fef6a8364f0c0764eba90e44a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Op.java#L53






[jira] [Updated] (HDFS-14739) RBF: LS command for mount point shows wrong owner and permission information.

2019-08-16 Thread xuzq (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuzq updated HDFS-14739:

Attachment: image-2019-08-16-16-28-15-022.png

> RBF: LS command for mount point shows wrong owner and permission information.
> -
>
> Key: HDFS-14739
> URL: https://issues.apache.org/jira/browse/HDFS-14739
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Priority: Major
> Attachments: image-2019-08-16-16-27-08-003.png, 
> image-2019-08-16-16-28-15-022.png
>
>
> ||source||target namespace||destination||owner||group||permission||
> |/mnt|ns0|/mnt|mnt|mnt_group|755|
> |/mnt/test1|ns1|/mnt/test1|mnt_test1|mnt_test1_group|755|
> |/test1|ns1|/test1|test1|test1_group|755|
> When doing getListing("/mnt"), the owner of */mnt/test1* should be *mnt_test1* 
> instead of *test1* in the result.
>  
> And if the mount table is as below, we should support getListing("/mnt") instead 
> of throwing an IOException when dfs.federation.router.default.nameservice.enable is 
> false.
> ||source||target namespace||destination||owner||group||permission||
> |/mnt/test1|ns0|/mnt/test1|test1|test1|755|
> |/mnt/test2|ns1|/mnt/test2|test2|test2|755|
>  
>  






[jira] [Updated] (HDFS-14739) RBF: LS command for mount point shows wrong owner and permission information.

2019-08-16 Thread xuzq (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuzq updated HDFS-14739:

Attachment: image-2019-08-16-16-27-08-003.png

> RBF: LS command for mount point shows wrong owner and permission information.
> -
>
> Key: HDFS-14739
> URL: https://issues.apache.org/jira/browse/HDFS-14739
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Priority: Major
> Attachments: image-2019-08-16-16-27-08-003.png
>
>
> ||source||target namespace||destination||owner||group||permission||
> |/mnt|ns0|/mnt|mnt|mnt_group|755|
> |/mnt/test1|ns1|/mnt/test1|mnt_test1|mnt_test1_group|755|
> |/test1|ns1|/test1|test1|test1_group|755|
> When doing getListing("/mnt"), the owner of */mnt/test1* should be *mnt_test1* 
> instead of *test1* in the result.
>  
> And if the mount table is as below, we should support getListing("/mnt") instead 
> of throwing an IOException when dfs.federation.router.default.nameservice.enable is 
> false.
> ||source||target namespace||destination||owner||group||permission||
> |/mnt/test1|ns0|/mnt/test1|test1|test1|755|
> |/mnt/test2|ns1|/mnt/test2|test2|test2|755|
>  
>  






[jira] [Commented] (HDFS-14452) Make Op valueOf Public

2019-08-16 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908832#comment-16908832
 ] 

Surendra Singh Lilhore commented on HDFS-14452:
---

LGTM, +1

> Make Op valueOf Public
> --
>
> Key: HDFS-14452
> URL: https://issues.apache.org/jira/browse/HDFS-14452
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: hemanthboyina
>Priority: Minor
>  Labels: noob
> Attachments: HDFS-14452.patch
>
>
> Change the signature of {{private static Op valueOf(byte code)}} to be public.  
> Right now, the only easy way to look up an Op is to pass in a {{DataInput}} 
> object, which is not all that flexible or efficient for other custom 
> implementations that want to store the Op code in a different way.
> https://github.com/apache/hadoop/blob/8c95cb9d6bef369fef6a8364f0c0764eba90e44a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Op.java#L53






[jira] [Commented] (HDFS-14740) HDFS read cache persistence support

2019-08-16 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908828#comment-16908828
 ] 

Feilong He commented on HDFS-14740:
---

An initial patch has been uploaded, contributed by Mo Rui.

> HDFS read cache persistence support
> ---
>
> Key: HDFS-14740
> URL: https://issues.apache.org/jira/browse/HDFS-14740
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14740.000.patch
>
>
> In HDFS-13762, persistent memory is enabled in HDFS centralized cache 
> management. Even though persistent memory can persist cache data, to 
> simplify the implementation, the previous cache data is cleaned up 
> during DataNode restarts. We propose to improve the HDFS persistent memory (PM) 
> cache by taking advantage of PM's data persistence characteristic, i.e., 
> recovering the cache status when the DataNode restarts; thus, cache warm-up 
> time can be saved for the user.






[jira] [Updated] (HDFS-14740) HDFS read cache persistence support

2019-08-16 Thread Feilong He (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feilong He updated HDFS-14740:
--
Attachment: HDFS-14740.000.patch
Status: Patch Available  (was: Open)

> HDFS read cache persistence support
> ---
>
> Key: HDFS-14740
> URL: https://issues.apache.org/jira/browse/HDFS-14740
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14740.000.patch
>
>
> In HDFS-13762, persistent memory is enabled in HDFS centralized cache 
> management. Even though persistent memory can persist cache data, to 
> simplify the implementation, the previous cache data is cleaned up 
> during DataNode restarts. We propose to improve the HDFS persistent memory (PM) 
> cache by taking advantage of PM's data persistence characteristic, i.e., 
> recovering the cache status when the DataNode restarts; thus, cache warm-up 
> time can be saved for the user.






[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-08-16 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908817#comment-16908817
 ] 

Siddharth Wagle commented on HDFS-2470:
---

- Agree with point #1; I will ignore root directory permissions. [~arp], are you 
OK with that, since you +1ed earlier?
- Regarding point #2, I did not want the generic StorageDirectory class to 
define a default, hence I defined the default for the NN and JN separately; and since 
unit tests don't care about setting permissions, I went with a null check to skip 
permission setting instead. But do you mean we should change all unit-test 
call sites to pass a default 700 permission? That would be OK, I guess.
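A minimal sketch of the null-check approach described above, assuming the 
permission is applied via Hadoop's {{FileUtil}}; the method and constant names 
are illustrative only:
{code:java}
import java.io.File;
import java.io.IOException;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.permission.FsPermission;

class StorageDirPermissionSketch {
  // e.g. the NN/JN default; unit tests could pass null to skip the chmod
  static final FsPermission DEFAULT_PERM = new FsPermission("700");

  static void maybeSetPermission(File dir, FsPermission perm) throws IOException {
    if (perm == null) {
      return;  // caller (e.g. a test) did not configure permissions
    }
    FileUtil.setPermission(dir, perm);
  }
}
{code}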

> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron T. Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, 
> HDFS-2470.06.patch, HDFS-2470.07.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.






[jira] [Commented] (HDDS-1894) Support listPipelines by filters in scmcli

2019-08-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908814#comment-16908814
 ] 

Hudson commented on HDDS-1894:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17135 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17135/])
HDDS-1894. Add filter to scmcli listPipelines. (#1286) (sammichen: rev 
bf3751521b51f8de25c12d6366e3fc535106cbb3)
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java


> Support listPipelines by filters in scmcli
> --
>
> Key: HDDS-1894
> URL: https://issues.apache.org/jira/browse/HDDS-1894
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Today scmcli has a subcommand that allows listing all pipelines. This ticket is 
> opened to filter the results by switches, e.g., filter by Factor: THREE and 
> State: OPEN. This will be useful for troubleshooting in a large cluster.
>  
> {code}
> bin/ozone scmcli listPipelines
> Pipeline[ Id: a8d1b0c9-e1d4-49ea-8746-3f61dfb5ee3f, Nodes: 
> cce44fde-bc8d-4063-97b3-6f557af756e1\{ip: 10.17.112.65, host: 
> ia0230.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, Type:RATIS, Factor:ONE, State:OPEN]
> Pipeline[ Id: c9c453d1-d74c-4414-b87f-1d3585d78a7c, Nodes: 
> 0b7b0b93-8323-4b82-8cc0-a9a5c10ab827\{ip: 10.17.112.29, host: 
> ia0138.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}c756a0e0-5a1b-4d03-ba5b-cafbcabac877\{ip: 10.17.112.27, host: 
> ia0134.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}bee45bd7-1ee6-4726-b3d1-81476dc1eb49\{ip: 10.17.112.28, host: 
> ia0136.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, Type:RATIS, Factor:THREE, State:OPEN]
> {code}
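A sketch of the kind of filtering this adds; the {{Pipeline}} fields below are 
a stand-in for illustration, not the actual Ozone API:
{code:java}
import java.util.List;
import java.util.stream.Collectors;

class PipelineFilterSketch {
  // minimal stand-in for the real Pipeline type
  static class Pipeline {
    String factor;  // "ONE" | "THREE"
    String state;   // "OPEN" | "CLOSED" | ...
  }

  // keep only Factor:THREE, State:OPEN pipelines, as in the example output above
  static List<Pipeline> filterOpenFactorThree(List<Pipeline> all) {
    return all.stream()
        .filter(p -> "THREE".equals(p.factor) && "OPEN".equals(p.state))
        .collect(Collectors.toList());
  }
}
{code}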






[jira] [Commented] (HDFS-14735) File could only be replicated to 0 nodes instead of minReplication (=1)

2019-08-16 Thread Tatyana Alexeyev (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908812#comment-16908812
 ] 

Tatyana Alexeyev commented on HDFS-14735:
-

There are some errors in the datanode log file:

2019-08-16 03:52:05,696 INFO org.apache.hadoop.hdfs.server.datanode.DataNode 
(BP-56322450-10.6.14.101-1565836229790 heartbeating to 
ip-10-6-14-101.us-east-2.compute.internal/10.6.14.101:8020): 
DatanodeRegistration(10.6.13.226:50010, 
datanodeUuid=4a8a4e5b-604d-4a8d-96b7-246ccf4d9baf, infoPort=50075, 
infoSecurePort=0, ipcPort=50020, 
storageInfo=lv=-57;cid=CID-39f69814-6f95-4195-8272-37b6d2166de4;nsid=950102054;c=1565836229790)
 Starting thread to transfer 
BP-56322450-10.6.14.101-1565836229790:blk_1073973783_232959 to 10.6.14.73:50010 
10.6.13.248:50010

2019-08-16 03:52:05,696 INFO org.apache.hadoop.hdfs.server.datanode.DataNode 
(BP-56322450-10.6.14.101-1565836229790 heartbeating to 
ip-10-6-14-101.us-east-2.compute.internal/10.6.14.101:8020): 
DatanodeRegistration(10.6.13.226:50010, 
datanodeUuid=4a8a4e5b-604d-4a8d-96b7-246ccf4d9baf, infoPort=50075, 
infoSecurePort=0, ipcPort=50020, 
storageInfo=lv=-57;cid=CID-39f69814-6f95-4195-8272-37b6d2166de4;nsid=950102054;c=1565836229790)
 Starting thread to transfer 
BP-56322450-10.6.14.101-1565836229790:blk_1073973798_232974 to 
10.6.13.248:50010 10.6.14.73:50010

2019-08-16 03:52:05,696 INFO org.apache.hadoop.hdfs.server.datanode.DataNode 
(org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@32ac48b6):
 DataTransfer, at ip-10-6-13-226.us-east-2.compute.internal:50010: Transmitted 
BP-56322450-10.6.14.101-1565836229790:blk_1073973727_232903 (numBytes=60530) to 
/10.6.13.248:50010

2019-08-16 03:52:05,696 INFO org.apache.hadoop.hdfs.server.datanode.DataNode 
(org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@57370a18):
 DataTransfer, at ip-10-6-13-226.us-east-2.compute.internal:50010: Transmitted 
BP-56322450-10.6.14.101-1565836229790:blk_1073973701_232877 (numBytes=36427) to 
/10.6.14.73:50010

2019-08-16 03:52:05,696 INFO org.apache.hadoop.hdfs.server.datanode.DataNode 
(org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@766e0d79):
 DataTransfer, at ip-10-6-13-226.us-east-2.compute.internal:50010: Transmitted 
BP-56322450-10.6.14.101-1565836229790:blk_1073973741_232917 (numBytes=36427) to 
/10.6.14.73:50010

2019-08-16 03:52:05,697 WARN org.apache.hadoop.hdfs.server.datanode.DataNode 
(BP-56322450-10.6.14.101-1565836229790 heartbeating to 
ip-10-6-14-101.us-east-2.compute.internal/10.6.14.101:8020): *Can't replicate 
block BP-56322450-10.6.14.101-1565836229790:blk_1073973750_232926 because 
on-disk length 402868 is shorter than NameNode recorded length 
9223372036854775807*


There are some messages related to replication in the NameNode log:

 

2019-08-16 04:00:00,141 INFO org.apache.hadoop.hdfs.StateChange (IPC Server 
handler 30 on 8020): DIR* completeFile: 
/tmp/hadoop-yarn/staging/sphdadm/.staging/job_1565836275738_5267/job.xml is 
closed by DFSClient_NONMAPREDUCE_-1718547537_1

2019-08-16 04:00:00,152 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory 
(IPC Server handler 1 on 8020): *Increasing replication from 1 to 4 for 
/tmp/hadoop-yarn/staging/sphdadm/.staging/job_1565836275738_5268/libjars/parquet-encoding-1.6.0.jar*


 

> File could only be replicated to 0 nodes instead of minReplication (=1)
> ---
>
> Key: HDFS-14735
> URL: https://issues.apache.org/jira/browse/HDFS-14735
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Tatyana Alexeyev
>Priority: Major
>
> Hello, I have an intermittent error when running my EMR Hadoop cluster:
> "Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
> /user/sphdadm/_sqoop/00501bd7b05e4182b5006b9d51 
> bafb7f_f405b2f3/_temporary/1/_temporary/attempt_1565136887564_20057_m_00_0/part-m-0.snappy
>  could only be replicated to 0 nodes instead of minReplication (=1). There 
> are 5 datanode(s) running and no node(s) are excluded in this operation."
> I am running Hadoop version 
> sphdadm@ip-10-6-15-108 hadoop]$ hadoop version
> Hadoop 2.8.5-amzn-4
>  






[jira] [Assigned] (HDFS-14740) HDFS read cache persistence support

2019-08-16 Thread Feilong He (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feilong He reassigned HDFS-14740:
-

Assignee: Feilong He

> HDFS read cache persistence support
> ---
>
> Key: HDFS-14740
> URL: https://issues.apache.org/jira/browse/HDFS-14740
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
>
> In HDFS-13762, persistent memory is enabled in HDFS centralized cache 
> management. Even though persistent memory can persist cache data, to 
> simplify the implementation, the previous cache data is cleaned up 
> during DataNode restarts. We propose to improve the HDFS persistent memory (PM) 
> cache by taking advantage of PM's data persistence characteristic, i.e., 
> recovering the cache status when the DataNode restarts; thus, cache warm-up 
> time can be saved for the user.






[jira] [Resolved] (HDDS-1894) Support listPipelines by filters in scmcli

2019-08-16 Thread Li Cheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Cheng resolved HDDS-1894.

   Resolution: Fixed
Fix Version/s: 0.4.1
 Release Note: https://github.com/apache/hadoop/pull/1286

> Support listPipelines by filters in scmcli
> --
>
> Key: HDDS-1894
> URL: https://issues.apache.org/jira/browse/HDDS-1894
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Today scmcli has a subcommand that allows listing all pipelines. This ticket is 
> opened to filter the results by switches, e.g., filter by Factor: THREE and 
> State: OPEN. This will be useful for troubleshooting in a large cluster.
>  
> {code}
> bin/ozone scmcli listPipelines
> Pipeline[ Id: a8d1b0c9-e1d4-49ea-8746-3f61dfb5ee3f, Nodes: 
> cce44fde-bc8d-4063-97b3-6f557af756e1\{ip: 10.17.112.65, host: 
> ia0230.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, Type:RATIS, Factor:ONE, State:OPEN]
> Pipeline[ Id: c9c453d1-d74c-4414-b87f-1d3585d78a7c, Nodes: 
> 0b7b0b93-8323-4b82-8cc0-a9a5c10ab827\{ip: 10.17.112.29, host: 
> ia0138.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}c756a0e0-5a1b-4d03-ba5b-cafbcabac877\{ip: 10.17.112.27, host: 
> ia0134.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}bee45bd7-1ee6-4726-b3d1-81476dc1eb49\{ip: 10.17.112.28, host: 
> ia0136.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, Type:RATIS, Factor:THREE, State:OPEN]
> {code}





