[jira] [Work started] (HDDS-2152) Ozone client fails with OOM while writing a large (~300MB) key.

2019-09-22 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2152 started by YiSheng Lien.
--
> Ozone client fails with OOM while writing a large (~300MB) key.
> ---
>
> Key: HDDS-2152
> URL: https://issues.apache.org/jira/browse/HDDS-2152
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Aravindan Vijayan
>Assignee: YiSheng Lien
>Priority: Major
>
> {code}
> dd if=/dev/zero of=testfile bs=1024 count=307200
> ozone sh key put /vol1/bucket1/key testfile
> {code}
> {code}
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
>   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
>   at org.apache.hadoop.hdds.scm.storage.BufferPool.allocateBufferIfNeeded(BufferPool.java:66)
>   at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.write(BlockOutputStream.java:234)
>   at org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.write(BlockOutputStreamEntry.java:129)
>   at org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:211)
>   at org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:193)
>   at org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:96)
>   at org.apache.hadoop.ozone.web.ozShell.keys.PutKeyHandler.call(PutKeyHandler.java:117)
>   at org.apache.hadoop.ozone.web.ozShell.keys.PutKeyHandler.call(PutKeyHandler.java:55)
>   at picocli.CommandLine.execute(CommandLine.java:1173)
>   at picocli.CommandLine.access$800(CommandLine.java:141)
> {code}
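
For intuition, here is a minimal, self-contained sketch of the failure mode. It is an assumption-level illustration, not Ozone's actual BufferPool code: if the client allocates a fresh buffer per chunk and releases nothing until the key is committed, heap use grows with the key size instead of staying bounded.

{code:java}
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class UnboundedBufferSketch {
  private final List<ByteBuffer> buffers = new ArrayList<>();
  private final int bufferSize;

  UnboundedBufferSketch(int bufferSize) {
    this.bufferSize = bufferSize;
  }

  // Called once per incoming chunk; nothing bounds buffers.size(), so a
  // ~300MB key needs ~300MB of live heap, and ByteBuffer.allocate()
  // eventually throws OutOfMemoryError, matching the trace above.
  ByteBuffer allocateBufferIfNeeded() {
    ByteBuffer buffer = ByteBuffer.allocate(bufferSize);
    buffers.add(buffer);
    return buffer;
  }
}
{code}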






[jira] [Updated] (HDDS-1738) Add nullable annotation for OMResponse classes

2019-09-22 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien updated HDDS-1738:
---
Status: Patch Available  (was: In Progress)

> Add nullable annotation for OMResponse classes
> --
>
> Key: HDDS-1738
> URL: https://issues.apache.org/jira/browse/HDDS-1738
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is to address [~arp]'s comment: "A future improvement unrelated to 
> your patch - replace null with Optional, or at least add a @Nullable 
> annotation on the parameter."
> Add @Nullable to all OMResponse fields that can be null.
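
For illustration, a hypothetical response class annotated in the requested style. The class and field names are made up (the real classes live under org.apache.hadoop.ozone.om.response), and JSR-305's javax.annotation.Nullable is assumed to be on the classpath:

{code:java}
import javax.annotation.Nullable;

public class OmResponseSketch {
  // Null on failed requests, so callers must check before dereferencing.
  @Nullable
  private final String volumeInfo;

  public OmResponseSketch(@Nullable String volumeInfo) {
    this.volumeInfo = volumeInfo;
  }

  @Nullable
  public String getVolumeInfo() {
    return volumeInfo;
  }
}
{code}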






[jira] [Updated] (HDDS-1738) Add nullable annotation for OMResponse classes

2019-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1738:
-
Labels: newbie pull-request-available  (was: newbie)

> Add nullable annotation for OMResponse classes
> --
>
> Key: HDDS-1738
> URL: https://issues.apache.org/jira/browse/HDDS-1738
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
>
> This is to address [~arp]'s comment: "A future improvement unrelated to 
> your patch - replace null with Optional, or at least add a @Nullable 
> annotation on the parameter."
> Add @Nullable to all OMResponse fields that can be null.






[jira] [Work logged] (HDDS-1738) Add nullable annotation for OMResponse classes

2019-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1738?focusedWorklogId=316397&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316397
 ]

ASF GitHub Bot logged work on HDDS-1738:


Author: ASF GitHub Bot
Created on: 23/Sep/19 04:47
Start Date: 23/Sep/19 04:47
Worklog Time Spent: 10m 
  Work Description: cxorm commented on pull request #1499: HDDS-1738. Add 
nullable annotation for OMResponse classes.
URL: https://github.com/apache/hadoop/pull/1499
 
 
   Modify classes under 
   
hadoop/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/
   
 



Issue Time Tracking
---

Worklog Id: (was: 316397)
Remaining Estimate: 0h
Time Spent: 10m

> Add nullable annotation for OMResponse classes
> --
>
> Key: HDDS-1738
> URL: https://issues.apache.org/jira/browse/HDDS-1738
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is to address [~arp]'s comment: "A future improvement unrelated to 
> your patch - replace null with Optional, or at least add a @Nullable 
> annotation on the parameter."
> Add @Nullable to all OMResponse fields that can be null.






[jira] [Updated] (HDFS-12862) CacheDirective becomes invalid when NN restart or failover

2019-09-22 Thread Rohith Sharma K S (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated HDFS-12862:
-
Fix Version/s: (was: 3.2.1)

> CacheDirective becomes invalid when NN restart or failover
> --
>
> Key: HDFS-12862
> URL: https://issues.apache.org/jira/browse/HDFS-12862
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching, hdfs
>Affects Versions: 2.7.1
> Environment: 
>Reporter: Wang XL
>Assignee: Wang XL
>Priority: Major
>  Labels: patch
> Fix For: 3.3.0
>
> Attachments: HDFS-12862-branch-2.7.1.001.patch, 
> HDFS-12862-trunk.002.patch, HDFS-12862-trunk.003.patch, 
> HDFS-12862-trunk.004.patch, HDFS-12862.005.patch, HDFS-12862.006.patch, 
> HDFS-12862.007.patch, HDFS-12862.branch-3.1.patch
>
>
> The logic in FSNDNCacheOp#modifyCacheDirective is not correct. When modifying 
> a cacheDirective, the expiration in the directive may be a relative 
> expiryTime, and the EditLog will serialize the relative expiry time.
> {code:java}
> // Some comments here
> static void modifyCacheDirective(
>     FSNamesystem fsn, CacheManager cacheManager, CacheDirectiveInfo directive,
>     EnumSet<CacheFlag> flags, boolean logRetryCache) throws IOException {
>   final FSPermissionChecker pc = getFsPermissionChecker(fsn);
>   cacheManager.modifyDirective(directive, pc, flags);
>   fsn.getEditLog().logModifyCacheDirectiveInfo(directive, logRetryCache);
> }
> {code}
> But when the SBN replays the log, it invokes 
> FSImageSerialization#readCacheDirectiveInfo, which reads the value as an 
> absolute expiryTime. This results in an inconsistency.
> {code:java}
> public static CacheDirectiveInfo readCacheDirectiveInfo(DataInput in)
>     throws IOException {
>   CacheDirectiveInfo.Builder builder = new CacheDirectiveInfo.Builder();
>   builder.setId(readLong(in));
>   int flags = in.readInt();
>   if ((flags & 0x1) != 0) {
>     builder.setPath(new Path(readString(in)));
>   }
>   if ((flags & 0x2) != 0) {
>     builder.setReplication(readShort(in));
>   }
>   if ((flags & 0x4) != 0) {
>     builder.setPool(readString(in));
>   }
>   if ((flags & 0x8) != 0) {
>     builder.setExpiration(
>         CacheDirectiveInfo.Expiration.newAbsolute(readLong(in)));
>   }
>   if ((flags & ~0xF) != 0) {
>     throw new IOException("unknown flags set in " +
>         "ModifyCacheDirectiveInfoOp: " + flags);
>   }
>   return builder.build();
> }
> {code}
> In other words, fsn.getEditLog().logModifyCacheDirectiveInfo(directive, 
> logRetryCache) may serialize a relative expiry time, but 
> builder.setExpiration(CacheDirectiveInfo.Expiration.newAbsolute(readLong(in))) 
> reads it as an absolute expiryTime.
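
A self-contained sketch of the mismatch described above, with the Hadoop types stripped away (the relative/absolute semantics are taken from the description, not re-verified against the code):

{code:java}
public class ExpiryMismatchSketch {
  public static void main(String[] args) {
    long relativeExpiryMs = 30L * 60 * 1000; // the long that gets serialized

    // Active NN intent: "now + 30 minutes".
    long intended = System.currentTimeMillis() + relativeExpiryMs;

    // Standby replay: newAbsolute(readLong(in)) treats the same raw value
    // as epoch millis, i.e. 00:30 on Jan 1 1970 -- already expired.
    long replayed = relativeExpiryMs;

    System.out.println("intended expiry: " + intended);
    System.out.println("replayed expiry: " + replayed);
  }
}
{code}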






[jira] [Commented] (HDFS-12862) CacheDirective becomes invalid when NN restart or failover

2019-09-22 Thread Rohith Sharma K S (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935543#comment-16935543
 ] 

Rohith Sharma K S commented on HDFS-12862:
--

branch-3.2 now points to version 3.2.2. Removed 3.2.1 as it has been released. 
If this is fixed in branch-3.2, please update the fix version to 3.2.2.

> CacheDirective becomes invalid when NN restart or failover
> --
>
> Key: HDFS-12862
> URL: https://issues.apache.org/jira/browse/HDFS-12862
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching, hdfs
>Affects Versions: 2.7.1
> Environment: 
>Reporter: Wang XL
>Assignee: Wang XL
>Priority: Major
>  Labels: patch
> Fix For: 3.3.0
>
> Attachments: HDFS-12862-branch-2.7.1.001.patch, 
> HDFS-12862-trunk.002.patch, HDFS-12862-trunk.003.patch, 
> HDFS-12862-trunk.004.patch, HDFS-12862.005.patch, HDFS-12862.006.patch, 
> HDFS-12862.007.patch, HDFS-12862.branch-3.1.patch
>
>
> The logic in FSNDNCacheOp#modifyCacheDirective is not correct. When modifying 
> a cacheDirective, the expiration in the directive may be a relative 
> expiryTime, and the EditLog will serialize the relative expiry time.
> {code:java}
> // Some comments here
> static void modifyCacheDirective(
>     FSNamesystem fsn, CacheManager cacheManager, CacheDirectiveInfo directive,
>     EnumSet<CacheFlag> flags, boolean logRetryCache) throws IOException {
>   final FSPermissionChecker pc = getFsPermissionChecker(fsn);
>   cacheManager.modifyDirective(directive, pc, flags);
>   fsn.getEditLog().logModifyCacheDirectiveInfo(directive, logRetryCache);
> }
> {code}
> But when the SBN replays the log, it invokes 
> FSImageSerialization#readCacheDirectiveInfo, which reads the value as an 
> absolute expiryTime. This results in an inconsistency.
> {code:java}
> public static CacheDirectiveInfo readCacheDirectiveInfo(DataInput in)
>     throws IOException {
>   CacheDirectiveInfo.Builder builder = new CacheDirectiveInfo.Builder();
>   builder.setId(readLong(in));
>   int flags = in.readInt();
>   if ((flags & 0x1) != 0) {
>     builder.setPath(new Path(readString(in)));
>   }
>   if ((flags & 0x2) != 0) {
>     builder.setReplication(readShort(in));
>   }
>   if ((flags & 0x4) != 0) {
>     builder.setPool(readString(in));
>   }
>   if ((flags & 0x8) != 0) {
>     builder.setExpiration(
>         CacheDirectiveInfo.Expiration.newAbsolute(readLong(in)));
>   }
>   if ((flags & ~0xF) != 0) {
>     throw new IOException("unknown flags set in " +
>         "ModifyCacheDirectiveInfoOp: " + flags);
>   }
>   return builder.build();
> }
> {code}
> In other words, fsn.getEditLog().logModifyCacheDirectiveInfo(directive, 
> logRetryCache) may serialize a relative expiry time, but 
> builder.setExpiration(CacheDirectiveInfo.Expiration.newAbsolute(readLong(in))) 
> reads it as an absolute expiryTime.






[jira] [Commented] (HDFS-13732) ECAdmin should print the policy name when an EC policy is set

2019-09-22 Thread Rohith Sharma K S (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935541#comment-16935541
 ] 

Rohith Sharma K S commented on HDFS-13732:
--

branch-3.2 now points to version 3.2.2. Removed 3.2.1 as it has been released.

> ECAdmin should print the policy name when an EC policy is set
> -
>
> Key: HDFS-13732
> URL: https://issues.apache.org/jira/browse/HDFS-13732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, tools
>Affects Versions: 3.0.0
>Reporter: Soumyapn
>Assignee: Zsolt Venczel
>Priority: Trivial
> Attachments: EC_Policy.PNG, HDFS-13732.01.patch
>
>
> Scenario:
> If a policy other than the default EC policy is set on an HDFS directory, the 
> console message still reads "Set default erasure coding policy on <path>".
> Expected output:
> The EC policy name should be displayed when the policy is set.
> Actual output:
> Set default erasure coding policy on <path>






[jira] [Updated] (HDFS-13287) TestINodeFile#testGetBlockType results in NPE when run alone

2019-09-22 Thread Rohith Sharma K S (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated HDFS-13287:
-
Fix Version/s: (was: 3.2.1)

> TestINodeFile#testGetBlockType results in NPE when run alone
> 
>
> Key: HDFS-13287
> URL: https://issues.apache.org/jira/browse/HDFS-13287
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Minor
> Fix For: 3.3.0, 3.1.3
>
> Attachments: HDFS-13287.01.patch
>
>
> When TestINodeFile#testGetBlockType is run by itself, it results in the 
> following error:
> {code:java}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.218 
> s <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.TestINodeFile
> [ERROR] 
> testGetBlockType(org.apache.hadoop.hdfs.server.namenode.TestINodeFile)  Time 
> elapsed: 0.023 s  <<< ERROR!
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager.getPolicyInfoByID(ErasureCodingPolicyManager.java:220)
> at 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager.getByID(ErasureCodingPolicyManager.java:208)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(INodeFile.java:207)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.<init>(INodeFile.java:266)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestINodeFile.createStripedINodeFile(TestINodeFile.java:112)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestINodeFile.testGetBlockType(TestINodeFile.java:299)
> 
> {code}
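
A toy JUnit 4 example of this failure pattern, assuming (as the trace suggests) that the test depends on static state another test happens to initialize; it is not the actual TestINodeFile code:

{code:java}
import static org.junit.Assert.assertEquals;

import org.junit.FixMethodOrder;
import org.junit.Test;
import org.junit.runners.MethodSorters;

@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class OrderDependentTestSketch {
  static String policyName; // stands in for the EC policy manager's state

  @Test
  public void test1Initializes() {
    policyName = "RS-6-3-1024k";
    assertEquals("RS-6-3-1024k", policyName);
  }

  @Test
  public void test2UsesSharedState() {
    // Passes when the whole class runs; NPEs when run alone, because
    // policyName was never set -- mirroring how testGetBlockType depends
    // on ErasureCodingPolicyManager being populated elsewhere.
    assertEquals(12, policyName.length());
  }
}
{code}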






[jira] [Commented] (HDFS-13732) ECAdmin should print the policy name when an EC policy is set

2019-09-22 Thread Rohith Sharma K S (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935542#comment-16935542
 ] 

Rohith Sharma K S commented on HDFS-13732:
--

branch-3.2 now points to version 3.2.2. Removed 3.2.1 as it has been released. 
If this is fixed in branch-3.2, please update the fix version to 3.2.2.

> ECAdmin should print the policy name when an EC policy is set
> -
>
> Key: HDFS-13732
> URL: https://issues.apache.org/jira/browse/HDFS-13732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, tools
>Affects Versions: 3.0.0
>Reporter: Soumyapn
>Assignee: Zsolt Venczel
>Priority: Trivial
> Attachments: EC_Policy.PNG, HDFS-13732.01.patch
>
>
> Scenario:
> If a policy other than the default EC policy is set on an HDFS directory, the 
> console message still reads "Set default erasure coding policy on <path>".
> Expected output:
> The EC policy name should be displayed when the policy is set.
> Actual output:
> Set default erasure coding policy on <path>






[jira] [Updated] (HDFS-13732) ECAdmin should print the policy name when an EC policy is set

2019-09-22 Thread Rohith Sharma K S (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated HDFS-13732:
-
Fix Version/s: (was: 3.2.1)

> ECAdmin should print the policy name when an EC policy is set
> -
>
> Key: HDFS-13732
> URL: https://issues.apache.org/jira/browse/HDFS-13732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, tools
>Affects Versions: 3.0.0
>Reporter: Soumyapn
>Assignee: Zsolt Venczel
>Priority: Trivial
> Attachments: EC_Policy.PNG, HDFS-13732.01.patch
>
>
> Scenario:
> If a policy other than the default EC policy is set on an HDFS directory, the 
> console message still reads "Set default erasure coding policy on <path>".
> Expected output:
> The EC policy name should be displayed when the policy is set.
> Actual output:
> Set default erasure coding policy on <path>






[jira] [Updated] (HDFS-14509) DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 3.x

2019-09-22 Thread Rohith Sharma K S (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated HDFS-14509:
-
Target Version/s: 2.10.0, 3.3.0, 3.1.3, 3.2.2  (was: 2.10.0, 3.3.0, 3.2.1, 
3.1.3)

> DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 
> 3.x
> ---
>
> Key: HDFS-14509
> URL: https://issues.apache.org/jira/browse/HDFS-14509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuxuan Wang
>Priority: Blocker
> Attachments: HDFS-14509-001.patch
>
>
> According to the doc, if we want to upgrade a cluster from 2.x to 3.x, we 
> need to upgrade the NN first. There will then be an intermediate state where 
> the NN is 3.x and the DNs are 2.x. At that moment, if a client reads (or 
> writes) a block, it gets a block token from the NN and then delivers the 
> token to a DN, which verifies it. But the verification in the code now is:
> {code:title=BlockTokenSecretManager.java|borderStyle=solid}
> public void checkAccess(...)
> {
>   ...
>   id.readFields(new DataInputStream(
>       new ByteArrayInputStream(token.getIdentifier())));
>   ...
>   if (!Arrays.equals(retrievePassword(id), token.getPassword())) {
>     throw new InvalidToken("Block token with " + id.toString()
>         + " doesn't have the correct token password");
>   }
> }
> {code}
> And {{retrievePassword(id)}} is:
> {code}
> public byte[] retrievePassword(BlockTokenIdentifier identifier)
> {
>   ...
>   return createPassword(identifier.getBytes(), key.getKey());
> }
> {code}
> So if the NN's identifier adds new fields, the DN will lose those fields and 
> compute the wrong password.
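
A simplified, runnable sketch of why the passwords diverge, assuming createPassword() is essentially an HMAC over the identifier's serialized bytes (hypothetical inputs; not the real BlockTokenSecretManager):

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class BlockTokenPasswordSketch {
  static byte[] password(byte[] identifier, byte[] key) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA1");
    mac.init(new SecretKeySpec(key, "HmacSHA1"));
    return mac.doFinal(identifier);
  }

  public static void main(String[] args) throws Exception {
    byte[] key = "shared-block-key".getBytes(StandardCharsets.UTF_8);
    // A 3.x NN serializes extra fields; a 2.x DN re-serializes only the
    // old fields, so the two sides hash different byte arrays.
    byte[] nnIdentifier = "id:v3:extra-field".getBytes(StandardCharsets.UTF_8);
    byte[] dnIdentifier = "id:v3".getBytes(StandardCharsets.UTF_8);

    // Prints false: the passwords never match, so checkAccess throws
    // InvalidToken.
    System.out.println(Arrays.equals(
        password(nnIdentifier, key), password(dnIdentifier, key)));
  }
}
{code}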






[jira] [Updated] (HDFS-14867) RBF: RouterRpcServer.getListing() returns wrong owner, group and permission.

2019-09-22 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-14867:
---
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> RBF: RouterRpcServer.getListing() returns wrong owner, group and permission.
> 
>
> Key: HDFS-14867
> URL: https://issues.apache.org/jira/browse/HDFS-14867
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14867.001.patch
>
>
> It's a simple bug, caused by RouterClientProtocol#821: a broken path is 
> passed to getMountPointStatus(). We should pass the full path of the child.
> The original test case TestRouterMountTable.testMountTablePermissionsNoDest() 
> doesn't cover enough cases, so I extended it. Test case 
> TestRouterMountTable.testGetMountPointStatusWithIOException() is also broken; 
> userB and groupB should be set on entry /testA/testB.






[jira] [Comment Edited] (HDFS-14867) RBF: RouterRpcServer.getListing() returns wrong owner, group and permission.

2019-09-22 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935528#comment-16935528
 ] 

Jinglun edited comment on HDFS-14867 at 9/23/19 3:42 AM:
-

Hi [~ayushtkn], thanks very much for the reminder! Yes, I believe they are the 
same problem. I'll post my comments under HDFS-14739.


was (Author: lijinglun):
Hi [~ayushtkn], thanks very much for the reminder! Yes, I believe they are the 
same problem, but the patch seems a little complicated and lacks a unit test. 
I'll post my comments under HDFS-14739.

> RBF: RouterRpcServer.getListing() returns wrong owner, group and permission.
> 
>
> Key: HDFS-14867
> URL: https://issues.apache.org/jira/browse/HDFS-14867
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14867.001.patch
>
>
> It's a simple bug, caused by RouterClientProtocol#821: a broken path is 
> passed to getMountPointStatus(). We should pass the full path of the child.
> The original test case TestRouterMountTable.testMountTablePermissionsNoDest() 
> doesn't cover enough cases, so I extended it. Test case 
> TestRouterMountTable.testGetMountPointStatusWithIOException() is also broken; 
> userB and groupB should be set on entry /testA/testB.






[jira] [Commented] (HDFS-14739) RBF: LS command for mount point shows wrong owner and permission information.

2019-09-22 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935534#comment-16935534
 ] 

Jinglun commented on HDFS-14739:


Hi [~xuzq_zander], I ran into the same problem recently. The root cause is that 
RouterClientProtocol.getListing() passes a broken child path to 
getMountPointStatus(). The v03 patch's 
RouterClientProtocol.getListing()#829~831 makes sense to me: you pass the full 
path of the child to it (see the sketch below).

If I understand correctly, v03's RouterClientProtocol.getListing()#748~764 is 
used when there is neither a default namespace nor a / mount. Shouldn't we 
simply let it fail in that case? I mean, why do we need to catch the 
IOException; could we just let it propagate?

About the unit test, I've made 
TestRouterMountTable.testMountTablePermissionsNoDest() stronger in HDFS-14867. 
Hope it can help, just for reference.
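
The sketch referenced above, with hypothetical names: resolve the child's fully qualified path before looking up its mount-point status, rather than passing the bare child name (the "broken path"):

{code:java}
public class ChildPathSketch {
  static String fullChildPath(String parent, String child) {
    return parent.endsWith("/") ? parent + child : parent + "/" + child;
  }

  public static void main(String[] args) {
    // Listing /mnt with a mount entry /mnt/test1: resolving the bare name
    // "test1" would match the wrong entry (e.g. /test1) and return its
    // owner/group/permission; the qualified path keeps the /mnt prefix.
    System.out.println(fullChildPath("/mnt", "test1")); // /mnt/test1
  }
}
{code}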

> RBF: LS command for mount point shows wrong owner and permission information.
> -
>
> Key: HDFS-14739
> URL: https://issues.apache.org/jira/browse/HDFS-14739
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14739-trunk-001.patch, HDFS-14739-trunk-002.patch, 
> HDFS-14739-trunk-003.patch, image-2019-08-16-17-15-50-614.png, 
> image-2019-08-16-17-16-00-863.png, image-2019-08-16-17-16-34-325.png
>
>
> ||source||target namespace||destination||owner||group||permission||
> |/mnt|ns0|/mnt|mnt|mnt_group|755|
> |/mnt/test1|ns1|/mnt/test1|mnt_test1|mnt_test1_group|755|
> |/test1|ns1|/test1|test1|test1_group|755|
> When doing getListing("/mnt"), the owner of */mnt/test1* should be 
> *mnt_test1* instead of *test1* in the result.
>  
> And with the mount table below, we should support getListing("/mnt") instead 
> of throwing an IOException when 
> dfs.federation.router.default.nameservice.enable is false.
> ||source||target namespace||destination||owner||group||permission||
> |/mnt/test1|ns0|/mnt/test1|test1|test1|755|
> |/mnt/test2|ns1|/mnt/test2|test2|test2|755|
>  
>  






[jira] [Work logged] (HDDS-1569) Add ability to SCM for creating multiple pipelines with same datanode

2019-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1569?focusedWorklogId=316387&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316387
 ]

ASF GitHub Bot logged work on HDDS-1569:


Author: ASF GitHub Bot
Created on: 23/Sep/19 03:24
Start Date: 23/Sep/19 03:24
Worklog Time Spent: 10m 
  Work Description: timmylicheng commented on pull request #1431: HDDS-1569 
Support creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop/pull/1431#discussion_r326940585
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestDeadNodeHandler.java
 ##
 @@ -81,6 +82,9 @@
   @Before
   public void setup() throws IOException, AuthenticationException {
 OzoneConfiguration conf = new OzoneConfiguration();
+conf.setInt(
+ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT,
+1000);
 
 Review comment:
   Already updated
 



Issue Time Tracking
---

Worklog Id: (was: 316387)
Time Spent: 2h 40m  (was: 2.5h)

> Add ability to SCM for creating multiple pipelines with same datanode
> -
>
> Key: HDDS-1569
> URL: https://issues.apache.org/jira/browse/HDDS-1569
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> - Refactor _RatisPipelineProvider.create()_ to be able to create pipelines 
> with datanodes that are not a part of sufficient pipelines
> - Define soft and hard upper bounds for pipeline membership (see the sketch 
> below)
> - Create SCMAllocationManager that can be leveraged to get a candidate set of 
> datanodes based on placement policies
> - Add the datanodes to internal data structures
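
A sketch of the soft/hard upper-bound idea from the second bullet, using hypothetical limits and names (only the concept is taken from the issue): a datanode below the soft limit is preferred for new pipelines, while one at the hard limit is excluded entirely.

{code:java}
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class PipelineEngagementSketch {
  static final int SOFT_LIMIT = 2; // prefer datanodes below this
  static final int HARD_LIMIT = 5; // never exceed this

  private final Map<UUID, Integer> pipelineCount = new ConcurrentHashMap<>();

  boolean preferred(UUID dn) {
    return pipelineCount.getOrDefault(dn, 0) < SOFT_LIMIT;
  }

  boolean eligible(UUID dn) {
    return pipelineCount.getOrDefault(dn, 0) < HARD_LIMIT;
  }

  void onPipelineCreated(UUID dn) {
    pipelineCount.merge(dn, 1, Integer::sum);
  }
}
{code}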






[jira] [Work logged] (HDDS-1569) Add ability to SCM for creating multiple pipelines with same datanode

2019-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1569?focusedWorklogId=316386&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316386
 ]

ASF GitHub Bot logged work on HDDS-1569:


Author: ASF GitHub Bot
Created on: 23/Sep/19 03:23
Start Date: 23/Sep/19 03:23
Worklog Time Spent: 10m 
  Work Description: timmylicheng commented on pull request #1431: HDDS-1569 
Support creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop/pull/1431#discussion_r326940515
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/states/Node2PipelineMap.java
 ##
 @@ -71,6 +71,10 @@ public synchronized void addPipeline(Pipeline pipeline) {
   UUID dnId = details.getUuid();
   dn2ObjectMap.computeIfAbsent(dnId, k -> ConcurrentHashMap.newKeySet())
   .add(pipeline.getId());
+  dn2ObjectMap.computeIfPresent(dnId, (k, v) -> {
 
 Review comment:
   From the Java doc 
(https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ConcurrentHashMap.html#computeIfAbsent-K-java.util.function.Function-),
 it looks like computeIfAbsent only adds a mapping when the key is absent. 
Line 74 is trying to merge a pipelineId into an existing entry. This happens 
when one datanode is assigned to multiple pipelines, which is a classic 
scenario for multi-raft.
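
For reference, a runnable demonstration of the Map semantics under discussion: computeIfAbsent returns the existing set when the key is already mapped, so the chained add() covers both the first and subsequent pipelines of a datanode. This is a sketch of the dn2ObjectMap pattern, not the actual Node2PipelineMap code.

{code:java}
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class ComputeIfAbsentSketch {
  public static void main(String[] args) {
    ConcurrentHashMap<UUID, Set<UUID>> dn2ObjectMap = new ConcurrentHashMap<>();
    UUID dn = UUID.randomUUID();

    dn2ObjectMap.computeIfAbsent(dn, k -> ConcurrentHashMap.newKeySet())
        .add(UUID.randomUUID()); // first pipeline: creates the set
    dn2ObjectMap.computeIfAbsent(dn, k -> ConcurrentHashMap.newKeySet())
        .add(UUID.randomUUID()); // second pipeline: reuses the same set

    System.out.println(dn2ObjectMap.get(dn).size()); // prints 2
  }
}
{code}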
 



Issue Time Tracking
---

Worklog Id: (was: 316386)
Time Spent: 2.5h  (was: 2h 20m)

> Add ability to SCM for creating multiple pipelines with same datanode
> -
>
> Key: HDDS-1569
> URL: https://issues.apache.org/jira/browse/HDDS-1569
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> - Refactor _RatisPipelineProvider.create()_ to be able to create pipelines 
> with datanodes that are not a part of sufficient pipelines
> - Define soft and hard upper bounds for pipeline membership
> - Create SCMAllocationManager that can be leveraged to get a candidate set of 
> datanodes based on placement policies
> - Add the datanodes to internal data structures






[jira] [Commented] (HDFS-14867) RBF: RouterRpcServer.getListing() returns wrong owner, group and permission.

2019-09-22 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935528#comment-16935528
 ] 

Jinglun commented on HDFS-14867:


Hi [~ayushtkn], thanks very much for the reminder! Yes, I believe they are the 
same problem, but the patch seems a little complicated and lacks a unit test. 
I'll post my comments under HDFS-14739.

> RBF: RouterRpcServer.getListing() returns wrong owner, group and permission.
> 
>
> Key: HDFS-14867
> URL: https://issues.apache.org/jira/browse/HDFS-14867
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14867.001.patch
>
>
> It's a simple bug, caused by RouterClientProtocol#821: a broken path is 
> passed to getMountPointStatus(). We should pass the full path of the child.
> The original test case TestRouterMountTable.testMountTablePermissionsNoDest() 
> doesn't cover enough cases, so I extended it. Test case 
> TestRouterMountTable.testGetMountPointStatusWithIOException() is also broken; 
> userB and groupB should be set on entry /testA/testB.






[jira] [Commented] (HDFS-14867) RBF: RouterRpcServer.getListing() returns wrong owner, group and permission.

2019-09-22 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935526#comment-16935526
 ] 

Hadoop QA commented on HDFS-14867:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 26m 28s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterFaultTolerant |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:efed4450bf1 |
| JIRA Issue | HDFS-14867 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12981023/HDFS-14867.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 78bff84a4be2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 659c888 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27936/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27936/testReport/ |
| Max. process+thread count | 1585 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27936/console |
| Powered by | Apache Yetus 0.8.0   

[jira] [Commented] (HDFS-14867) RBF: RouterRpcServer.getListing() returns wrong owner, group and permission.

2019-09-22 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935517#comment-16935517
 ] 

Ayush Saxena commented on HDFS-14867:
-

Is this the same as HDFS-14739?

> RBF: RouterRpcServer.getListing() returns wrong owner, group and permission.
> 
>
> Key: HDFS-14867
> URL: https://issues.apache.org/jira/browse/HDFS-14867
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14867.001.patch
>
>
> It's a simple bug, caused by RouterClientProtocol#821: a broken path is 
> passed to getMountPointStatus(). We should pass the full path of the child.
> The original test case TestRouterMountTable.testMountTablePermissionsNoDest() 
> doesn't cover enough cases, so I extended it. Test case 
> TestRouterMountTable.testGetMountPointStatusWithIOException() is also broken; 
> userB and groupB should be set on entry /testA/testB.






[jira] [Updated] (HDFS-14814) RBF: RouterQuotaUpdateService supports inherited rule.

2019-09-22 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-14814:
---
Summary: RBF: RouterQuotaUpdateService supports inherited rule.  (was: RBF: 
The quota should be set the same as the nearest parent in Global Quota.)

> RBF: RouterQuotaUpdateService supports inherited rule.
> --
>
> Key: HDFS-14814
> URL: https://issues.apache.org/jira/browse/HDFS-14814
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14814.001.patch, HDFS-14814.002.patch, 
> HDFS-14814.003.patch
>
>
> I want to add a rule *'The quota should be set the same as the nearest 
> parent'* to Global Quota. Supposing we have the mount table below:
> M1: /dir-a                 ns0->/dir-a    \{nquota=10,squota=20}
> M2: /dir-a/dir-b           ns1->/dir-b    \{nquota=-1,squota=30}
> M3: /dir-a/dir-b/dir-c     ns2->/dir-c    \{nquota=-1,squota=-1}
> M4: /dir-d                 ns3->/dir-d    \{nquota=-1,squota=-1}
> The quota for the remote locations on the namespaces should be:
>  ns0->/dir-a    \{nquota=10,squota=20}
>  ns1->/dir-b    \{nquota=10,squota=30}
>  ns2->/dir-c    \{nquota=10,squota=30}
>  ns3->/dir-d    \{nquota=-1,squota=-1}
> The quota of a remote location is set the same as its corresponding 
> MountTable, and if the MountTable has no quota then the quota is set to the 
> nearest parent MountTable that has one.
> It's easy to implement. In RouterQuotaUpdateService, each time we compute the 
> currentQuotaUsage we can get the quota info for each MountTable, so we can 
> check and fix all the MountTables whose quota doesn't match the rule above.
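
A minimal sketch of the inherited rule using the mount table above (hypothetical types; the real logic would live in RouterQuotaUpdateService): a mount entry without a quota (-1) inherits from the nearest ancestor entry that has one.

{code:java}
import java.util.Map;
import java.util.TreeMap;

public class InheritedQuotaSketch {
  // Walk up the mount path until an ancestor with an explicit quota is found.
  static long effectiveNsQuota(Map<String, Long> nsQuota, String mount) {
    for (String p = mount; !p.isEmpty(); p = p.substring(0, p.lastIndexOf('/'))) {
      long q = nsQuota.getOrDefault(p, -1L);
      if (q != -1) {
        return q;
      }
    }
    return -1; // no ancestor sets a quota
  }

  public static void main(String[] args) {
    Map<String, Long> nsQuota = new TreeMap<>();
    nsQuota.put("/dir-a", 10L);             // M1
    nsQuota.put("/dir-a/dir-b", -1L);       // M2
    nsQuota.put("/dir-a/dir-b/dir-c", -1L); // M3
    nsQuota.put("/dir-d", -1L);             // M4

    System.out.println(effectiveNsQuota(nsQuota, "/dir-a/dir-b/dir-c")); // 10
    System.out.println(effectiveNsQuota(nsQuota, "/dir-d"));             // -1
  }
}
{code}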






[jira] [Commented] (HDFS-14814) RBF: The quota should be set the same as the nearest parent in Global Quota.

2019-09-22 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935506#comment-16935506
 ] 

Jinglun commented on HDFS-14814:


Hi [~ayushtkn], your understanding is right; the 1, 2, 3 idea is exactly what I 
mean.

In this jira, I intend to let RouterQuotaUpdateService fix all the bad quotas. 
To fix a quota I must know what the right quota is, so I use the *inherited 
rule* to determine whether a quota is correct or not.

To implement the inherited rule, we need to update all RPCs related to global 
quota, including the RouterAdmin mount table updates, 
RouterRpcServer.getQuotaUsage(), RouterRpcServer.setQuota() and the 
RouterQuotaUpdateService fixing. I don't want to put all the changes in one 
big patch, so I want to start with RouterQuotaUpdateService.

RouterQuotaUpdateService can be seen as the final judge: it will always make 
everything right. So I want to start with it.

> RBF: The quota should be set the same as the nearest parent in Global Quota.
> 
>
> Key: HDFS-14814
> URL: https://issues.apache.org/jira/browse/HDFS-14814
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14814.001.patch, HDFS-14814.002.patch, 
> HDFS-14814.003.patch
>
>
> I want to add a rule *'The quota should be set the same as the nearest 
> parent'* to Global Quota. Supposing we have the mount table below:
> M1: /dir-a                 ns0->/dir-a    \{nquota=10,squota=20}
> M2: /dir-a/dir-b           ns1->/dir-b    \{nquota=-1,squota=30}
> M3: /dir-a/dir-b/dir-c     ns2->/dir-c    \{nquota=-1,squota=-1}
> M4: /dir-d                 ns3->/dir-d    \{nquota=-1,squota=-1}
> The quota for the remote locations on the namespaces should be:
>  ns0->/dir-a    \{nquota=10,squota=20}
>  ns1->/dir-b    \{nquota=10,squota=30}
>  ns2->/dir-c    \{nquota=10,squota=30}
>  ns3->/dir-d    \{nquota=-1,squota=-1}
> The quota of a remote location is set the same as its corresponding 
> MountTable, and if the MountTable has no quota then the quota is set to the 
> nearest parent MountTable that has one.
> It's easy to implement. In RouterQuotaUpdateService, each time we compute the 
> currentQuotaUsage we can get the quota info for each MountTable, so we can 
> check and fix all the MountTables whose quota doesn't match the rule above.






[jira] [Updated] (HDFS-14867) RBF: RouterRpcServer.getListing() returns wrong owner, group and permission.

2019-09-22 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-14867:
---
Attachment: HDFS-14867.001.patch
Status: Patch Available  (was: Open)

> RBF: RouterRpcServer.getListing() returns wrong owner, group and permission.
> 
>
> Key: HDFS-14867
> URL: https://issues.apache.org/jira/browse/HDFS-14867
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14867.001.patch
>
>
> It's a simple bug, caused by RouterClientProtocol#821: a broken path is 
> passed to getMountPointStatus(). We should pass the full path of the child.
> The original test case TestRouterMountTable.testMountTablePermissionsNoDest() 
> doesn't cover enough cases, so I extended it. Test case 
> TestRouterMountTable.testGetMountPointStatusWithIOException() is also broken; 
> userB and groupB should be set on entry /testA/testB.






[jira] [Created] (HDFS-14867) RBF: RouterRpcServer.getListing() returns wrong owner, group and permission.

2019-09-22 Thread Jinglun (Jira)
Jinglun created HDFS-14867:
--

 Summary: RBF: RouterRpcServer.getListing() returns wrong owner, 
group and permission.
 Key: HDFS-14867
 URL: https://issues.apache.org/jira/browse/HDFS-14867
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jinglun
Assignee: Jinglun


It's a simple bug, caused by RouterClientProtocol#821: a broken path is passed 
to getMountPointStatus(). We should pass the full path of the child.
The original test case TestRouterMountTable.testMountTablePermissionsNoDest() 
doesn't cover enough cases, so I extended it. Test case 
TestRouterMountTable.testGetMountPointStatusWithIOException() is also broken; 
userB and groupB should be set on entry /testA/testB.






[jira] [Commented] (HDFS-14858) [SBN read] Allow configurably enable/disable AlignmentContext on NameNode

2019-09-22 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935435#comment-16935435
 ] 

Wei-Chiu Chuang commented on HDFS-14858:


Should we consider this a blocker for the 3.2.1 and 3.1.3 releases?

> [SBN read] Allow configurably enable/disable AlignmentContext on NameNode
> -
>
> Key: HDFS-14858
> URL: https://issues.apache.org/jira/browse/HDFS-14858
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14858.001.patch
>
>
> As brought up under HDFS-14277, we should make sure SBN read has no 
> performance impact when it is not enabled. One potential overhead of SBN read 
> is maintaining and updating additional state on the NameNode. Specifically, 
> this is done by creating/updating/checking a {{GlobalStateIdContext}} 
> instance. Currently, even without enabling SBN read, this logic is still 
> checked. We can make this configurable so that when SBN read is not enabled 
> there is no such overhead and everything works as-is.
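
A minimal sketch of the proposed gating; the config key name and the wiring are assumptions, not the actual patch: only instantiate the state-id context when SBN read support is explicitly enabled, so the default path pays no per-call bookkeeping cost.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class AlignmentContextGateSketch {
  // Hypothetical key; the real name would come from HDFS-14858's patch.
  static final String SBN_READ_ENABLED_KEY =
      "dfs.namenode.state.context.enabled";

  static Object maybeCreateStateIdContext(Configuration conf) {
    if (conf.getBoolean(SBN_READ_ENABLED_KEY, false)) {
      return new Object(); // stands in for new GlobalStateIdContext(...)
    }
    return null; // RPC server skips state-id handling entirely
  }
}
{code}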






[jira] [Commented] (HDFS-14818) Check native pmdk lib by 'hadoop checknative' command

2019-09-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935365#comment-16935365
 ] 

Hudson commented on HDFS-14818:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17353 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17353/])
HDFS-14818. Check native pmdk lib by 'hadoop checknative' command. (rakeshr: 
rev 659c88801d008bb352d10a1cb3bd0e401486cc9b)
* (edit) 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/pmdk_load.c
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/pmdk_load.h
* (edit) hadoop-common-project/hadoop-common/src/CMakeLists.txt
* (edit) 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetCache.java


> Check native pmdk lib by 'hadoop checknative' command
> -
>
> Key: HDFS-14818
> URL: https://issues.apache.org/jira/browse/HDFS-14818
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: native
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Minor
> Attachments: HDFS-14818.000.patch, HDFS-14818.001.patch, 
> HDFS-14818.002.patch, HDFS-14818.003.patch, HDFS-14818.004.patch, 
> check_native_after_building_with_PMDK.png, 
> check_native_after_building_with_PMDK_using_NAME_instead_of_REALPATH.png, 
> check_native_after_building_without_PMDK.png
>
>
> Currently, the 'hadoop checknative' command supports checking native libs 
> such as zlib, snappy, openssl and ISA-L. It's necessary to include the pmdk 
> lib in that check.






[jira] [Updated] (HDFS-14818) Check native pmdk lib by 'hadoop checknative' command

2019-09-22 Thread Rakesh R (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-14818:

Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~PhiloHe] for the contribution. +1 LGTM.

Committed to trunk.

> Check native pmdk lib by 'hadoop checknative' command
> -
>
> Key: HDFS-14818
> URL: https://issues.apache.org/jira/browse/HDFS-14818
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: native
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14818.000.patch, HDFS-14818.001.patch, 
> HDFS-14818.002.patch, HDFS-14818.003.patch, HDFS-14818.004.patch, 
> check_native_after_building_with_PMDK.png, 
> check_native_after_building_with_PMDK_using_NAME_instead_of_REALPATH.png, 
> check_native_after_building_without_PMDK.png
>
>
> Currently, the 'hadoop checknative' command supports checking native libs 
> such as zlib, snappy, openssl and ISA-L. It's necessary to include the pmdk 
> lib in that check.






[jira] [Created] (HDDS-2164) om.db.checkpoints is filling up fast

2019-09-22 Thread Nanda kumar (Jira)
Nanda kumar created HDDS-2164:
-

 Summary: om.db.checkpoints is filling up fast
 Key: HDDS-2164
 URL: https://issues.apache.org/jira/browse/HDDS-2164
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Reporter: Nanda kumar


{{om.db.checkpoints}} is filling up fast; we should clean it up as well.
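
One possible shape of the cleanup, as a hedged sketch (the path and the one-day retention are assumptions, not OM's actual policy): delete checkpoint directories older than a retention window so the directory cannot grow without bound.

{code:java}
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.Comparator;
import java.util.stream.Stream;

public class CheckpointCleanupSketch {
  public static void main(String[] args) throws IOException {
    Path root = Paths.get("/var/lib/ozone/om.db.checkpoints"); // assumed path
    Instant cutoff = Instant.now().minus(1, ChronoUnit.DAYS);  // assumed policy

    try (DirectoryStream<Path> checkpoints = Files.newDirectoryStream(root)) {
      for (Path dir : checkpoints) {
        if (Files.getLastModifiedTime(dir).toInstant().isBefore(cutoff)) {
          deleteRecursively(dir);
        }
      }
    }
  }

  // Depth-first delete: children before their parent directory.
  static void deleteRecursively(Path dir) throws IOException {
    try (Stream<Path> walk = Files.walk(dir)) {
      walk.sorted(Comparator.reverseOrder()).forEach(p -> {
        try {
          Files.delete(p);
        } catch (IOException e) {
          throw new UncheckedIOException(e);
        }
      });
    }
  }
}
{code}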






[jira] [Work logged] (HDDS-2001) Update Ratis version to 0.4.0

2019-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2001?focusedWorklogId=316249&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316249
 ]

ASF GitHub Bot logged work on HDDS-2001:


Author: ASF GitHub Bot
Created on: 22/Sep/19 12:07
Start Date: 22/Sep/19 12:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1497: HDDS-2001. 
Update Ratis version to 0.4.0.
URL: https://github.com/apache/hadoop/pull/1497#issuecomment-533876176
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 157 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ ozone-0.4.1 Compile Tests _ |
   | 0 | mvndep | 85 | Maven dependency ordering for branch |
   | +1 | mvninstall | 732 | ozone-0.4.1 passed |
   | +1 | compile | 444 | ozone-0.4.1 passed |
   | +1 | checkstyle | 91 | ozone-0.4.1 passed |
   | +1 | mvnsite | 0 | ozone-0.4.1 passed |
   | +1 | shadedclient | 1049 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 186 | ozone-0.4.1 passed |
   | 0 | spotbugs | 438 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 676 | ozone-0.4.1 passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 42 | Maven dependency ordering for patch |
   | +1 | mvninstall | 567 | the patch passed |
   | +1 | compile | 398 | the patch passed |
   | +1 | javac | 398 | the patch passed |
   | +1 | checkstyle | 82 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 648 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | the patch passed |
   | +1 | findbugs | 643 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 190 | hadoop-hdds in the patch failed. |
   | -1 | unit | 282 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 6695 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler
 |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1497/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1497 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 3f9cc7b12697 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | ozone-0.4.1 / 2eb41fb |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1497/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1497/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1497/1/testReport/ |
   | Max. process+thread count | 1150 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds hadoop-hdds/container-service hadoop-ozone 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1497/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 316249)
Time Spent: 1h 20m  (was: 1h 10m)

> Update Ratis version to 0.4.0
> -
>
> Key: HDDS-2001
> URL: https://issues.apache.org/jira/browse/HDDS-2001
> Project: Hadoop Distributed Data Store
>  Issue Type: 

[jira] [Work logged] (HDDS-2001) Update Ratis version to 0.4.0

2019-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2001?focusedWorklogId=316239&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316239
 ]

ASF GitHub Bot logged work on HDDS-2001:


Author: ASF GitHub Bot
Created on: 22/Sep/19 10:13
Start Date: 22/Sep/19 10:13
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on issue #1483: HDDS-2001. 
Update Ratis version to 0.4.0.
URL: https://github.com/apache/hadoop/pull/1483#issuecomment-533867812
 
 
   @anuengineer Thanks for the review. There were conflicts while backporting 
this to ozone-0.4.1, so I created #1497 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 316239)
Time Spent: 1h 10m  (was: 1h)

> Update Ratis version to 0.4.0
> -
>
> Key: HDDS-2001
> URL: https://issues.apache.org/jira/browse/HDDS-2001
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Update Ratis version to 0.4.0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2001) Update Ratis version to 0.4.0

2019-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2001?focusedWorklogId=316238&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316238
 ]

ASF GitHub Bot logged work on HDDS-2001:


Author: ASF GitHub Bot
Created on: 22/Sep/19 10:10
Start Date: 22/Sep/19 10:10
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1497: 
HDDS-2001. Update Ratis version to 0.4.0.
URL: https://github.com/apache/hadoop/pull/1497
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 316238)
Time Spent: 1h  (was: 50m)

> Update Ratis version to 0.4.0
> -
>
> Key: HDDS-2001
> URL: https://issues.apache.org/jira/browse/HDDS-2001
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Update Ratis version to 0.4.0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14849) Erasure Coding: the internal block is replicated many times when datanode is decommissioning

2019-09-22 Thread HuangTao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HuangTao updated HDFS-14849:

Description: 
When a datanode stays in DECOMMISSION_INPROGRESS status, the EC internal 
blocks on that datanode are replicated many times.

// added 2019/09/19
I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes 
simultaneously. 
 !scheduleReconstruction.png! 

 !fsck-file.png! 

  was:
When the datanode keeping in DECOMMISSION_INPROGRESS status, the EC block in 
that datanode will be replicated infinitely.

// added 2019/09/19
I reproduced this scenario in a 163 nodes cluster with decommission 100 nodes 
simultaneously. 
 !scheduleReconstruction.png! 

 !fsck-file.png! 


> Erasure Coding: the internal block is replicated many times when datanode is 
> decommissioning
> 
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, liveBlockIndices.png, scheduleReconstruction.png
>
>
> When a datanode stays in DECOMMISSION_INPROGRESS status, the EC internal 
> blocks on that datanode are replicated many times.
> // added 2019/09/19
> I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes 
> simultaneously. 
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 
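
To make the reported behavior concrete, here is a minimal, hypothetical Java 
sketch (illustrative names only, not the actual HDFS BlockManager code). It 
contrasts a scan that re-queues reconstruction for the same internal block on 
every pass over a DECOMMISSION_INPROGRESS node with one that first checks the 
work already pending:

{code}
// Hypothetical sketch -- illustrative names only, not actual HDFS code.
import java.util.HashSet;
import java.util.Set;

public class RescheduleSketch {
  // Tracks blocks whose reconstruction is already queued.
  private final Set<Long> pendingReconstruction = new HashSet<>();

  // Buggy shape: every scan of a DECOMMISSION_INPROGRESS node re-queues the
  // same internal block, so it is reconstructed many times.
  void scanWithoutGuard(long blockId, boolean decommissioning) {
    if (decommissioning) {
      schedule(blockId);
    }
  }

  // Guarded shape: consult pending work first, so each block is queued once.
  void scanWithGuard(long blockId, boolean decommissioning) {
    if (decommissioning && pendingReconstruction.add(blockId)) {
      schedule(blockId);
    }
  }

  private void schedule(long blockId) {
    System.out.println("schedule reconstruction of block " + blockId);
  }

  public static void main(String[] args) {
    RescheduleSketch s = new RescheduleSketch();
    for (int i = 0; i < 3; i++) {
      s.scanWithoutGuard(42L, true); // prints three times
    }
    for (int i = 0; i < 3; i++) {
      s.scanWithGuard(42L, true);    // prints once
    }
  }
}
{code}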



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14849) Erasure Coding: the internal block is replicated many times when datanode is decommissioning

2019-09-22 Thread HuangTao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HuangTao updated HDFS-14849:

Summary: Erasure Coding: the internal block is replicated many times when 
datanode is decommissioning  (was: Erasure Coding: replicate block infinitely 
when datanode being decommissioning)

> Erasure Coding: the internal block is replicated many times when datanode is 
> decommissioning
> 
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, liveBlockIndices.png, scheduleReconstruction.png
>
>
> When the datanode keeping in DECOMMISSION_INPROGRESS status, the EC block in 
> that datanode will be replicated infinitely.
> // added 2019/09/19
> I reproduced this scenario in a 163 nodes cluster with decommission 100 nodes 
> simultaneously. 
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDDS-2020) Remove mTLS from Ozone GRPC

2019-09-22 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar reopened HDDS-2020:
---

Reopening for the ozone-0.4.1 branch.

> Remove mTLS from Ozone GRPC
> ---
>
> Key: HDDS-2020
> URL: https://issues.apache.org/jira/browse/HDDS-2020
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Generic gRPC supports mTLS for mutual authentication. However, Ozone has a 
> built-in block token mechanism for the server to authenticate the client, so 
> we only need TLS for the client to authenticate the server, plus wire 
> encryption. Removing the mTLS support also simplifies the gRPC server/client 
> configuration.
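
In grpc-java terms the client-side difference is small. A rough sketch 
(generic grpc-java usage, not Ozone's actual wiring -- the file names, host, 
and port below are placeholders):

{code}
// Illustrative grpc-java sketch -- not Ozone's actual wiring; file names,
// host, and port are placeholders.
import io.grpc.ManagedChannel;
import io.grpc.netty.GrpcSslContexts;
import io.grpc.netty.NettyChannelBuilder;
import io.netty.handler.ssl.SslContext;
import java.io.File;

public class TlsChannelSketch {
  // Plain TLS: the client trusts a CA to authenticate the server and gets
  // wire encryption -- all that is needed once block tokens authenticate
  // the client.
  static ManagedChannel tlsOnly() throws Exception {
    SslContext ssl = GrpcSslContexts.forClient()
        .trustManager(new File("ca.crt"))
        .build();
    return NettyChannelBuilder.forAddress("datanode.example.com", 9858)
        .sslContext(ssl)
        .build();
  }

  // mTLS: the extra keyManager() call supplies a client certificate; this
  // is the client-side configuration that dropping mTLS removes.
  static ManagedChannel mutualTls() throws Exception {
    SslContext ssl = GrpcSslContexts.forClient()
        .trustManager(new File("ca.crt"))
        .keyManager(new File("client.crt"), new File("client.key"))
        .build();
    return NettyChannelBuilder.forAddress("datanode.example.com", 9858)
        .sslContext(ssl)
        .build();
  }
}
{code}

The only client-side delta is the keyManager() call, which is exactly the 
configuration that removing mTLS lets the client drop.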



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDDS-2001) Update Ratis version to 0.4.0

2019-09-22 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar reopened HDDS-2001:
---

> Update Ratis version to 0.4.0
> -
>
> Key: HDDS-2001
> URL: https://issues.apache.org/jira/browse/HDDS-2001
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Update Ratis version to 0.4.0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2001) Update Ratis version to 0.4.0

2019-09-22 Thread Nanda kumar (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935271#comment-16935271
 ] 

Nanda kumar commented on HDDS-2001:
---

Reopening for the ozone-0.4.1 branch.

> Update Ratis version to 0.4.0
> -
>
> Key: HDDS-2001
> URL: https://issues.apache.org/jira/browse/HDDS-2001
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Update Ratis version to 0.4.0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is deleted

2019-09-22 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935268#comment-16935268
 ] 

Hadoop QA commented on HDFS-14853:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:efed4450bf1 |
| JIRA Issue | HDFS-14853 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980990/HDFS-14853.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c2ed4217f2aa 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a94aa1f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27935/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27935/testReport/ |
| Max. process+thread count | 2758 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is deleted

2019-09-22 Thread Ranith Sardar (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935226#comment-16935226
 ] 

Ranith Sardar commented on HDFS-14853:
--

Thanks [~ayushtkn]! The new patch handles the checkstyle warning.

> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is deleted
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14853.001.patch, HDFS-14853.002.patch, 
> HDFS-14853.003.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}
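
For illustration, a self-contained sketch (hypothetical names, not the 
attached patch) of the defensive lookup the stack trace suggests: when the 
excluded node has been deleted concurrently, treat it as already absent 
instead of dereferencing null:

{code}
// Hypothetical sketch only -- not the attached patch.
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

public class TopologySketch {
  private final Map<String, String> nodesByPath = new ConcurrentHashMap<>();
  private final Random random = new Random();

  String chooseRandomExcluding(String excludedPath) {
    String excluded = nodesByPath.get(excludedPath);
    // The excluded node may have been removed concurrently (e.g. a datanode
    // was deleted); a missing node means "nothing to exclude", so we avoid
    // passing null downstream and triggering an NPE.
    List<String> candidates = nodesByPath.values().stream()
        .filter(n -> excluded == null || !n.equals(excluded))
        .collect(Collectors.toList());
    return candidates.isEmpty()
        ? null
        : candidates.get(random.nextInt(candidates.size()));
  }

  public static void main(String[] args) {
    TopologySketch t = new TopologySketch();
    t.nodesByPath.put("/rack1/dn1", "dn1");
    t.nodesByPath.put("/rack1/dn2", "dn2");
    t.nodesByPath.remove("/rack1/dn2");            // excluded node deleted
    System.out.println(t.chooseRandomExcluding("/rack1/dn2")); // dn1, no NPE
  }
}
{code}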



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is deleted

2019-09-22 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14853:
-
Attachment: HDFS-14853.003.patch

> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is deleted
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14853.001.patch, HDFS-14853.002.patch, 
> HDFS-14853.003.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14850) Optimize FileSystemAccessService#getFileSystemConfiguration

2019-09-22 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935225#comment-16935225
 ] 

Hadoop QA commented on HDFS-14850:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 53s{color} 
| {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem |
|   | hadoop.fs.http.server.TestHttpFSServerWebServerWithRandomSecret |
|   | hadoop.fs.http.server.TestHttpFSServerNoACLs |
|   | hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem |
|   | hadoop.fs.http.server.TestHttpFSServer |
|   | hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem |
|   | hadoop.fs.http.server.TestHttpFSServerNoXAttrs |
|   | hadoop.lib.service.hadoop.TestFileSystemAccessService |
|   | hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem |
|   | hadoop.fs.http.server.TestHttpFSServerWebServer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:efed4450bf1 |
| JIRA Issue | HDFS-14850 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980987/HDFS-14850.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0fc6a357d30a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality |