[jira] [Updated] (HDFS-12248) SNN will not upload fsimage on IOE and Interrupted exceptions

2017-08-25 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-12248:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Committed to {{trunk}}. [~vinayrpet], thanks a lot for the review, and thanks 
to the others for the additional reviews.

> SNN will not upload fsimage on IOE and Interrupted exceptions
> -
>
> Key: HDFS-12248
> URL: https://issues.apache.org/jira/browse/HDFS-12248
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12248-002.patch, HDFS-12248-003.patch, 
> HDFS-12248.patch
>
>
> Related to HDFS-9787. While the fsimage is being uploaded to the ANN, if an 
> interrupt or IOE occurs, {{isPrimaryCheckPointer}} is set to {{false}}. If a 
> rolling upgrade is triggered at the same time, the checkpoint is then done 
> without sending the fsimage, since {{sendRequest}} will be {{false}}.
> So the {{rollback}} image is never sent to the ANN.
> {code}
>   } catch (ExecutionException e) {
> ioe = new IOException("Exception during image upload: " + 
> e.getMessage(),
> e.getCause());
> break;
>   } catch (InterruptedException e) {
> ie = e;
> break;
>   }
> }
> lastUploadTime = monotonicNow();
> // we are primary if we successfully updated the ANN
> this.isPrimaryCheckPointer = success;
> {code}
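For illustration, a minimal sketch of one possible mitigation (hypothetical 
names, not the committed patch): force the upload for rollback images so the 
rolling-upgrade path does not depend on an {{isPrimaryCheckPointer}} flag 
that a failed upload has cleared.

{code}
// Sketch only, not the committed patch: one way to make sure a rollback
// image is always uploaded, even if a previous upload attempt failed and
// cleared isPrimaryCheckPointer. All names here are hypothetical.
class CheckpointDecisionSketch {
  private volatile boolean isPrimaryCheckPointer;

  /** Decide whether the freshly saved image must be sent to the ANN. */
  boolean shouldSendImage(boolean isRollbackImage, boolean quietPeriodPassed) {
    // Forcing the upload for rollback images decouples the rolling-upgrade
    // path from the (possibly stale) primary-checkpointer flag.
    return isRollbackImage || isPrimaryCheckPointer || quietPeriodPassed;
  }

  void onUploadFinished(boolean success) {
    // Mirrors the existing assignment; with the rollback override above, a
    // transient failure no longer suppresses a later rollback upload.
    this.isPrimaryCheckPointer = success;
  }
}
{code}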






[jira] [Commented] (HDFS-12282) Ozone: DeleteKey-4: Block delete via HB between SCM and DN

2017-08-25 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142630#comment-16142630
 ] 

Anu Engineer commented on HDFS-12282:
-

cc: [~nandakumar131] Once the delete capability is in, we should support delete 
in Corona. We might need some sort of deleted-block JMX counter on the data 
node, so we can assert that the block got deleted from the data node.
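A hypothetical sketch of such a counter follows; the names are illustrative, 
not an existing Hadoop API:

{code}
// Hypothetical sketch of the suggested counter: a tiny MBean the datanode
// could register so a test (e.g. in Corona) can assert over JMX that
// blocks really got deleted. Names are illustrative.
import java.util.concurrent.atomic.AtomicLong;
import org.apache.hadoop.metrics2.util.MBeans;

interface DeletedBlockMetricsMBean {
  long getDeletedBlockCount();
}

class DeletedBlockMetrics implements DeletedBlockMetricsMBean {
  private final AtomicLong deletedBlocks = new AtomicLong();

  void incrDeletedBlocks(long n) {
    deletedBlocks.addAndGet(n);
  }

  @Override
  public long getDeletedBlockCount() {
    return deletedBlocks.get();
  }

  static DeletedBlockMetrics register() {
    DeletedBlockMetrics metrics = new DeletedBlockMetrics();
    // Registers under Hadoop:service=DataNode,name=DeletedBlocks so a test
    // can poll the counter through the platform MBean server.
    MBeans.register("DataNode", "DeletedBlocks", metrics);
    return metrics;
  }
}
{code}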


> Ozone: DeleteKey-4: Block delete via HB between SCM and DN
> --
>
> Key: HDFS-12282
> URL: https://issues.apache.org/jira/browse/HDFS-12282
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: Block delete via HB between SCM and DN.png, Block delete 
> via HB between SCM and DN v2.png, HDFS-12282-HDFS-7240.001.patch, 
> HDFS-12282-HDFS-7240.002.patch, HDFS-12282-HDFS-7240.003.patch, 
> HDFS-12282-HDFS-7240.004.patch, HDFS-12282-HDFS-7240.005.patch, 
> HDFS-12282.WIP.patch
>
>
> This is task 3 in the design doc; it implements the SCM-to-datanode 
> interactions (a rough sketch of the flow follows this list), including:
> # SCM sends block deletion messages via HB to the datanode
> # the datanode changes block state to deleting when it processes the HB response
> # the datanode sends deletion ACKs back to SCM
> # SCM handles the ACKs and removes the blocks from its DB
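The four steps could be captured roughly as the following contract 
(hypothetical types; the real patch carries these messages inside the 
existing heartbeat protobufs):

{code}
// Rough sketch of the four-step flow above, with hypothetical types;
// the actual patch wires this through the HB request/response.
import java.util.List;

interface ScmBlockDeleteFlow {
  /** 1. SCM piggybacks pending block-deletion transactions on the HB response. */
  List<Long> blocksToDelete(String datanodeId);

  /** 2 and 3. DN marks the blocks deleting, deletes them, then ACKs to SCM. */
  void onHeartbeatResponse(List<Long> deleteTxIds);

  /** 4. SCM clears the ACKed transactions from its deletion log (DB). */
  void onDeletionAck(String datanodeId, List<Long> ackedTxIds);
}
{code}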






[jira] [Commented] (HDFS-12333) Ozone: Extend Datanode web interface with SCM information

2017-08-25 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142629#comment-16142629
 ] 

Anu Engineer commented on HDFS-12333:
-

+1 on this change. I will wait for [~msingh] to take a look. As far as I am 
concerned, it is OK to have a simple page like this for SCM. We might have to 
expose some of this via the SCM CLI at some point. cc: [~xyao], 
[~vagarychen], [~nandakumar131], comments?

> Ozone: Extend Datanode web interface with SCM information
> -
>
> Key: HDFS-12333
> URL: https://issues.apache.org/jira/browse/HDFS-12333
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12333-HDFS-7240.001.patch, ozonedatanode.png
>
>
> Current Datanode web interface shows information about the Block Pools:
> * Namenode Address
> * BlockPoolID
> * ActorState
> * Last Heartbeat
> * Last Block Report
> I propose in this jira to add the same information about the SCM (if the 
> datanode is a member of an ozone cluster). It would help in checking the 
> current state of the datanode (is ozone enabled? Is there an active 
> connection to the SCM?).
> 1. Suggested information to display:
> * SCM hostname/address
> * EndpointState (GETVERSION, REGISTER, HEARTBEAT, SHUTDOWN)
> * version (from the VersionResponse)
> * missedCount
> These could be displayed by publishing JMX information from 
> SCMConnectionManager or the EndpointStateMachines.
> 2. StorageLocationReport[] from the ContainerLocationManager should also be 
> exposed over JMX and displayed on the web interface:
> * id
> * failed (bool)
> * capacity
> * scmUsed
> * remaining
> 3. Possible report of Last Heartbeat/Container report should be investigated.
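A hypothetical sketch of the JMX surface proposed in point 1; the names are 
illustrative, not the actual SCMConnectionManager API:

{code}
// Hypothetical MXBean sketch for the proposed datanode web-UI fields;
// names are illustrative, not the actual SCMConnectionManager API.
public interface ScmConnectionInfoMXBean {
  String getScmAddress();      // SCM hostname/address
  String getEndpointState();   // GETVERSION, REGISTER, HEARTBEAT, SHUTDOWN
  String getVersion();         // from the VersionResponse
  int getMissedCount();        // missed heartbeats
}
{code}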






[jira] [Commented] (HDFS-11834) Ozone: Fix TestArchive#testArchive

2017-08-25 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142625#comment-16142625
 ] 

Anu Engineer commented on HDFS-11834:
-

The code changes look correct, but I was just wondering why RandomStringUtils 
was not working and why the reset is needed in between. 
I am +1 from the Jenkins point of view; it has been a while since we had a 
clean test run like this. Thank you for fixing this issue. I am just curious 
to understand why the older code worked on Mac but failed on Linux.
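For reference, a self-contained demonstration of the reset behaviour in 
question; this is generic {{java.util.zip.Adler32}} usage, not the test's 
actual code:

{code}
// Why reset() matters when an Adler32 instance is reused: without it the
// second checksum is computed over the concatenation of both inputs.
import java.util.zip.Adler32;

public class AdlerResetDemo {
  public static void main(String[] args) {
    byte[] a = "first".getBytes();
    byte[] b = "second".getBytes();

    Adler32 sum = new Adler32();
    sum.update(a, 0, a.length);
    sum.reset();                       // drop the state from the first buffer
    sum.update(b, 0, b.length);

    Adler32 fresh = new Adler32();
    fresh.update(b, 0, b.length);

    // With the reset, the reused instance matches a fresh one.
    System.out.println(sum.getValue() == fresh.getValue());  // prints true
  }
}
{code}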

> Ozone: Fix TestArchive#testArchive
> --
>
> Key: HDFS-11834
> URL: https://issues.apache.org/jira/browse/HDFS-11834
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11834-HDFS-7240.001.patch
>
>
> This Adler32-based CRC check does not mismatch on Mac but does on some 
> Jenkins machines, based on some recent Jenkins runs:
> {code}
> org.apache.hadoop.scm.TestArchive.testArchive
> Failing for the past 1 build (Since Failed#19352 )
> Took 21 sec.
> Error Message
> expected:<3488429799> but was:<2161587943>
> Stacktrace
> java.lang.AssertionError: expected:<3488429799> but was:<2161587943>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at org.apache.hadoop.scm.TestArchive.testArchive(TestArchive.java:104)
> {code}






[jira] [Assigned] (HDFS-11834) Ozone: Fix TestArchive#testArchive

2017-08-25 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDFS-11834:
---

Assignee: Xiaoyu Yao  (was: Anu Engineer)

> Ozone: Fix TestArchive#testArchive
> --
>
> Key: HDFS-11834
> URL: https://issues.apache.org/jira/browse/HDFS-11834
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11834-HDFS-7240.001.patch
>
>
> This Adler32-based CRC check does not mismatch on Mac but does on some 
> Jenkins machines, based on some recent Jenkins runs:
> {code}
> org.apache.hadoop.scm.TestArchive.testArchive
> Failing for the past 1 build (Since Failed#19352 )
> Took 21 sec.
> Error Message
> expected:<3488429799> but was:<2161587943>
> Stacktrace
> java.lang.AssertionError: expected:<3488429799> but was:<2161587943>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at org.apache.hadoop.scm.TestArchive.testArchive(TestArchive.java:104)
> {code}






[jira] [Commented] (HDFS-12282) Ozone: DeleteKey-4: Block delete via HB between SCM and DN

2017-08-25 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142623#comment-16142623
 ] 

Anu Engineer commented on HDFS-12282:
-

[~cheersyang] Thank you for taking care of all the comments. I appreciate the 
move from HB thread to its own background service.
+1 on v5 patch. Please feel free to commit when you get a chance.


> Ozone: DeleteKey-4: Block delete via HB between SCM and DN
> --
>
> Key: HDFS-12282
> URL: https://issues.apache.org/jira/browse/HDFS-12282
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: Block delete via HB between SCM and DN.png, Block delete 
> via HB between SCM and DN v2.png, HDFS-12282-HDFS-7240.001.patch, 
> HDFS-12282-HDFS-7240.002.patch, HDFS-12282-HDFS-7240.003.patch, 
> HDFS-12282-HDFS-7240.004.patch, HDFS-12282-HDFS-7240.005.patch, 
> HDFS-12282.WIP.patch
>
>
> This is task 3 in the design doc; it implements the SCM-to-datanode 
> interactions, including:
> # SCM sends block deletion messages via HB to the datanode
> # the datanode changes block state to deleting when it processes the HB response
> # the datanode sends deletion ACKs back to SCM
> # SCM handles the ACKs and removes the blocks from its DB






[jira] [Comment Edited] (HDFS-12336) Listing encryption zones still fails when deleted EZ is not a direct child of snapshottable directory

2017-08-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142620#comment-16142620
 ] 

Xiao Chen edited comment on HDFS-12336 at 8/26/17 5:12 AM:
---

Thanks for the new patch Wellington, looks pretty good to me. I think we're 
really close. (Yea, HDFS-10899 was me, sorry for the rebase :) )

Comments, mostly nits:
- I think we should pass in {{zonePath}}, instead of {{inode.getFullPathName}} 
to {{isValidAbsolutePath}}. The latter is more expensive, and later 
{{getINodesInPath}} is resolving the former.
- Nit: I think we can leave out the outer () in {{return (path != null && 
path.startsWith(Path.SEPARATOR));}}
- Nit: Test could use {{assertNotEquals}} instead of 
{{assertFalse(x.equals(z))}}.
- Please fix checkstyle while you're at it. (I can't explain why, but we really 
love the '.' at the end of the first sentence)


was (Author: xiaochen):
Thanks for the new patch Wellington, looks pretty good to me. I think we're 
really close. (Yea, HDFS-10899 was me, sorry for the rebase :) )

Comments, mostly nits:
- I think we should pass in {{zonePath}}, instead of {{inode.getFullPathName}} 
to {{isValidAbsolutePath}}. The latter is more expensive, and later 
{{getINodesInPath}} is resolving the former.
- Nit: I think we can leave out the outer () in {{return (path != null && 
path.startsWith(Path.SEPARATOR));}}
- Nit: Test could use {{assertNotEquals}} instead of 
{{assertFalse(x.equals(y))}}.
- Please fix checkstyle while you're at it. (I can't explain why, but we really 
love the '.' at the end of the first sentence)

> Listing encryption zones still fails when deleted EZ is not a direct child of 
> snapshottable directory
> -
>
> Key: HDFS-12336
> URL: https://issues.apache.org/jira/browse/HDFS-12336
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha4
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-12336.001.patch, HDFS-12336.002.patch, 
> HDFS-12336.003.patch
>
>
> The fix proposed in HDFS-11197 didn't cover the scenario where a deleted EZ 
> that is still under a snapshot is not a direct child of the snapshottable 
> directory.
> Here is the code snippet proposed in HDFS-11197 that avoids the error 
> reported by *hdfs crypto -listZones* when a deleted EZ is still under a given 
> snapshot:
> {noformat}
>   INode lastINode = null;
>   if (inode.getParent() != null || inode.isRoot()) {
> INodesInPath iip = dir.getINodesInPath(pathName, DirOp.READ_LINK);
> lastINode = iip.getLastINode();
>   }
>   if (lastINode == null || lastINode.getId() != ezi.getINodeId()) {
> continue;
>   }
> {noformat} 
> It ignores an EZ that is a direct child of a snapshot, because its parent 
> inode will be null and it isn't the root inode. However, if the EZ is not 
> directly under the snapshottable directory, its parent will not be null, so 
> it passes this check and then fails later due to the *absolute path required* 
> validation error.
> I would like to work on a fix that also covers this scenario.
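A sketch of the extra validation being discussed in the comments below 
(hypothetical surrounding code; only the helper is shown):

{code}
// Sketch of the discussed check: verify the reconstructed zone path is
// absolute before resolving it, so EZs buried deeper under a snapshot are
// skipped instead of tripping the "absolute path required" error.
// The class name is hypothetical.
import org.apache.hadoop.fs.Path;

final class EzPathCheckSketch {
  static boolean isValidAbsolutePath(String path) {
    return path != null && path.startsWith(Path.SEPARATOR);
  }
}
{code}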






[jira] [Comment Edited] (HDFS-12336) Listing encryption zones still fails when deleted EZ is not a direct child of snapshottable directory

2017-08-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142620#comment-16142620
 ] 

Xiao Chen edited comment on HDFS-12336 at 8/26/17 5:12 AM:
---

Thanks for the new patch Wellington, looks pretty good to me. I think we're 
really close. (Yea, HDFS-10899 was me, sorry for the rebase :) )

Comments, mostly nits:
- I think we should pass in {{zonePath}}, instead of {{inode.getFullPathName}} 
to {{isValidAbsolutePath}}. The latter is more expensive, and later 
{{getINodesInPath}} is resolving the former.
- Nit: I think we can leave out the outer () in {{return (path != null && 
path.startsWith(Path.SEPARATOR));}}
- Nit: Test could use {{assertNotEquals}} instead of 
{{assertFalse(x.equals(y))}}.
- Please fix checkstyle while you're at it. (I can't explain why, but we really 
love the '.' at the end of the first sentence)


was (Author: xiaochen):
Thanks for the new patch Wellington, looks pretty good to me. I think we're 
really close.

Comments, mostly nits:
- I think we should pass in {{zonePath}}, instead of {{inode.getFullPathName}} 
to {{isValidAbsolutePath}}. The latter is more expensive, and later 
{{getINodesInPath}} is resolving the former.
- Nit: I think we can leave out the outer () in {{return (path != null && 
path.startsWith(Path.SEPARATOR));}}
- Nit: Test could use {{assertNotEquals}} instead of 
{{assertFalse(x.equals(y))}}.
- Please fix checkstyle while you're at it. (I can't explain why, but we really 
love the '.' at the end of the first sentence)

> Listing encryption zones still fails when deleted EZ is not a direct child of 
> snapshottable directory
> -
>
> Key: HDFS-12336
> URL: https://issues.apache.org/jira/browse/HDFS-12336
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha4
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-12336.001.patch, HDFS-12336.002.patch, 
> HDFS-12336.003.patch
>
>
> The fix proposed in HDFS-11197 didn't cover the scenario where a deleted EZ 
> that is still under a snapshot is not a direct child of the snapshottable 
> directory.
> Here is the code snippet proposed in HDFS-11197 that avoids the error 
> reported by *hdfs crypto -listZones* when a deleted EZ is still under a given 
> snapshot:
> {noformat}
>   INode lastINode = null;
>   if (inode.getParent() != null || inode.isRoot()) {
> INodesInPath iip = dir.getINodesInPath(pathName, DirOp.READ_LINK);
> lastINode = iip.getLastINode();
>   }
>   if (lastINode == null || lastINode.getId() != ezi.getINodeId()) {
> continue;
>   }
> {noformat} 
> It ignores an EZ that is a direct child of a snapshot, because its parent 
> inode will be null and it isn't the root inode. However, if the EZ is not 
> directly under the snapshottable directory, its parent will not be null, so 
> it passes this check and then fails later due to the *absolute path required* 
> validation error.
> I would like to work on a fix that also covers this scenario.






[jira] [Commented] (HDFS-12336) Listing encryption zones still fails when deleted EZ is not a direct child of snapshottable directory

2017-08-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142620#comment-16142620
 ] 

Xiao Chen commented on HDFS-12336:
--

Thanks for the new patch Wellington, looks pretty good to me. I think we're 
really close.

Comments, mostly nits:
- I think we should pass in {{zonePath}}, instead of {{inode.getFullPathName}} 
to {{isValidAbsolutePath}}. The latter is more expensive, and later 
{{getINodesInPath}} is resolving the former.
- Nit: I think we can leave out the outer () in {{return (path != null && 
path.startsWith(Path.SEPARATOR));}}
- Nit: Test could use {{assertNotEquals}} instead of 
{{assertFalse(x.equals(y))}}.
- Please fix checkstyle while you're at it. (I can't explain why, but we really 
love the '.' at the end of the first sentence)

> Listing encryption zones still fails when deleted EZ is not a direct child of 
> snapshottable directory
> -
>
> Key: HDFS-12336
> URL: https://issues.apache.org/jira/browse/HDFS-12336
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha4
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-12336.001.patch, HDFS-12336.002.patch, 
> HDFS-12336.003.patch
>
>
> The fix proposed in HDFS-11197 didn't cover the scenario where a deleted EZ 
> that is still under a snapshot is not a direct child of the snapshottable 
> directory.
> Here is the code snippet proposed in HDFS-11197 that avoids the error 
> reported by *hdfs crypto -listZones* when a deleted EZ is still under a given 
> snapshot:
> {noformat}
>   INode lastINode = null;
>   if (inode.getParent() != null || inode.isRoot()) {
> INodesInPath iip = dir.getINodesInPath(pathName, DirOp.READ_LINK);
> lastINode = iip.getLastINode();
>   }
>   if (lastINode == null || lastINode.getId() != ezi.getINodeId()) {
> continue;
>   }
> {noformat} 
> It ignores an EZ that is a direct child of a snapshot, because its parent 
> inode will be null and it isn't the root inode. However, if the EZ is not 
> directly under the snapshottable directory, its parent will not be null, so 
> it passes this check and then fails later due to the *absolute path required* 
> validation error.
> I would like to work on a fix that also covers this scenario.






[jira] [Updated] (HDFS-12359) Re-encryption should operate with minimum KMS ACL requirements.

2017-08-25 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12359:
-
Attachment: HDFS-12359.01.patch

Attaching a patch for the fix:
- we can (and should) simply drain the client cache without having to send an 
{{invalidateCache}} to the server; {{invalidateCache}} is the responsibility of 
key rolling.
- after the cache is drained, generate a new EDEK and use its key version. 
This eliminates the need for {{getCurrentKey}}.
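A hedged sketch of that flow; the drain call in particular is illustrative 
and may differ from the committed patch:

{code}
// Sketch of the approach above: drain the client-side EDEK cache (no
// server invalidateCache, which needs the MANAGEMENT acl), then generate a
// fresh EDEK and read the key version off it (no getCurrentKey, which
// needs the READ acl). The drain call is illustrative.
import java.io.IOException;
import java.security.GeneralSecurityException;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.EncryptedKeyVersion;

final class ReencryptKeyVersionSketch {
  static String latestKeyVersionName(KeyProviderCryptoExtension provider,
      String ezKeyName) throws IOException, GeneralSecurityException {
    provider.drain(ezKeyName);  // drop locally cached EDEKs for this key
    EncryptedKeyVersion edek = provider.generateEncryptedKey(ezKeyName);
    return edek.getEncryptionKeyVersionName();
  }
}
{code}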

> Re-encryption should operate with minimum KMS ACL requirements.
> ---
>
> Key: HDFS-12359
> URL: https://issues.apache.org/jira/browse/HDFS-12359
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 3.0.0-beta1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-12359.01.patch
>
>
> This was caught during KMS ACL testing.
> HDFS-10899 gets the current key versions from the KMS directly, which 
> requires {{READ}} acls.
> It also calls invalidateCache, which requires {{MANAGEMENT}} acls.
> We should fix re-encryption so that it does not require more ACLs than the 
> original encryption.






[jira] [Updated] (HDFS-12359) Re-encryption should operate with minimum KMS ACL requirements.

2017-08-25 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12359:
-
Status: Patch Available  (was: Open)

> Re-encryption should operate with minimum KMS ACL requirements.
> ---
>
> Key: HDFS-12359
> URL: https://issues.apache.org/jira/browse/HDFS-12359
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 3.0.0-beta1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-12359.01.patch
>
>
> This was caught during KMS ACL testing.
> HDFS-10899 gets the current key versions from the KMS directly, which 
> requires {{READ}} acls.
> It also calls invalidateCache, which requires {{MANAGEMENT}} acls.
> We should fix re-encryption so that it does not require more ACLs than the 
> original encryption.






[jira] [Commented] (HDFS-12358) Catch IOException when transferring edit log to Journal current dir through JN sync

2017-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142614#comment-16142614
 ] 

Hadoop QA commented on HDFS-12358:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12358 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883830/HDFS-12358.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 01af2927f9fe 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b89ffcf |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Commented] (HDFS-12191) Provide option to not capture the accessTime change of a file to snapshot if no other modification has been done

2017-08-25 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142610#comment-16142610
 ] 

Yongjun Zhang commented on HDFS-12191:
--

Thanks much for the review, [~manojg].

Uploaded rev 004 to address your comments. Good catch on the typo.


> Provide option to not capture the accessTime change of a file to snapshot if 
> no other modification has been done
> 
>
> Key: HDFS-12191
> URL: https://issues.apache.org/jira/browse/HDFS-12191
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 3.0.0-beta1
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12191.001.patch, HDFS-12191.002.patch, 
> HDFS-12191.003.patch, HDFS-12191.004.patch
>
>
> Currently, if the accessTime of a file changed before a snapshot is taken, 
> this accessTime is captured in the snapshot, even if no other modification 
> was made to this file.
> Because of this, when we calculate a snapshotDiff, more work needs to be done 
> for this file, e.g., the metadataEquals method will be called, even though no 
> modification was made (and thus none is recorded in the snapshotDiff). This 
> can slow snapshotDiff down quite a lot when there are a lot of files to be 
> examined.
> This jira is to provide an option to skip capturing accessTime-only changes 
> to a snapshot, so that snapshotDiff can be done faster.
> When the accessTime of a file changed and there is some other modification to 
> the file, the access time will still be captured in the snapshot.
> Sometimes we want the accessTime to be captured in a snapshot, such that when 
> restoring from the snapshot, we know the accessTime as of that snapshot. So 
> this new feature is optional, and is controlled by a config property.
> Worth mentioning: how accurately the accessTime is captured depends on the 
> following config, which has a default value of 1 hour (in milliseconds), 
> meaning a new access within an hour of a previous access will not be 
> captured.
> {code}
> public static final String  DFS_NAMENODE_ACCESSTIME_PRECISION_KEY =
>     HdfsClientConfigKeys.DeprecatedKeys.DFS_NAMENODE_ACCESSTIME_PRECISION_KEY;
> public static final long DFS_NAMENODE_ACCESSTIME_PRECISION_DEFAULT = 3600000;
> {code}
> .
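A sketch of how such an opt-out is typically wired up; the property key below 
is illustrative, so check the committed patch for the final name:

{code}
// Sketch only: reading a snapshot "skip accessTime-only change" switch.
// The key name is illustrative; the committed patch defines the real one.
import org.apache.hadoop.conf.Configuration;

final class SnapshotAccessTimeOptionSketch {
  static final String SKIP_KEY =
      "dfs.namenode.snapshot.skip.capture.accesstime-only-change";

  static boolean skipAccessTimeOnlyChange(Configuration conf) {
    // Default false keeps the current behaviour: accessTime-only changes
    // are still captured unless the operator opts out.
    return conf.getBoolean(SKIP_KEY, false);
  }
}
{code}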






[jira] [Updated] (HDFS-12191) Provide option to not capture the accessTime change of a file to snapshot if no other modification has been done

2017-08-25 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-12191:
-
Attachment: HDFS-12191.004.patch

> Provide option to not capture the accessTime change of a file to snapshot if 
> no other modification has been done
> 
>
> Key: HDFS-12191
> URL: https://issues.apache.org/jira/browse/HDFS-12191
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 3.0.0-beta1
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12191.001.patch, HDFS-12191.002.patch, 
> HDFS-12191.003.patch, HDFS-12191.004.patch
>
>
> Currently, if the accessTime of a file changed before a snapshot is taken, 
> this accessTime is captured in the snapshot, even if no other modification 
> was made to this file.
> Because of this, when we calculate a snapshotDiff, more work needs to be done 
> for this file, e.g., the metadataEquals method will be called, even though no 
> modification was made (and thus none is recorded in the snapshotDiff). This 
> can slow snapshotDiff down quite a lot when there are a lot of files to be 
> examined.
> This jira is to provide an option to skip capturing accessTime-only changes 
> to a snapshot, so that snapshotDiff can be done faster.
> When the accessTime of a file changed and there is some other modification to 
> the file, the access time will still be captured in the snapshot.
> Sometimes we want the accessTime to be captured in a snapshot, such that when 
> restoring from the snapshot, we know the accessTime as of that snapshot. So 
> this new feature is optional, and is controlled by a config property.
> Worth mentioning: how accurately the accessTime is captured depends on the 
> following config, which has a default value of 1 hour (in milliseconds), 
> meaning a new access within an hour of a previous access will not be 
> captured.
> {code}
> public static final String  DFS_NAMENODE_ACCESSTIME_PRECISION_KEY =
>     HdfsClientConfigKeys.DeprecatedKeys.DFS_NAMENODE_ACCESSTIME_PRECISION_KEY;
> public static final long DFS_NAMENODE_ACCESSTIME_PRECISION_DEFAULT = 3600000;
> {code}
> .






[jira] [Commented] (HDFS-7878) API - expose an unique file identifier

2017-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142582#comment-16142582
 ] 

Hadoop QA commented on HDFS-7878:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 59s{color} | {color:orange} root: The patch generated 2 new + 382 unchanged 
- 0 fixed = 384 total (was 382) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
22s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
44s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestFileAppendRestart |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.TestWriteReadStripedFile |
|   | hadoop.hdfs.server.namenode.TestReencryption |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-7878 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-12358) Catch IOException when transferring edit log to Journal current dir through JN sync

2017-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142568#comment-16142568
 ] 

Hadoop QA commented on HDFS-12358:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}120m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}151m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 |
|   | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
|   | org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12358 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883830/HDFS-12358.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1e8540e2d31c 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 36bada3 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20875/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-12358) Catch IOException when transferring edit log to Journal current dir through JN sync

2017-08-25 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142563#comment-16142563
 ] 

Arpit Agarwal commented on HDFS-12358:
--

Triggered another Jenkins run. The failures are likely unrelated.

> Catch IOException when transferring edit log to Journal current dir through 
> JN sync
> ---
>
> Key: HDFS-12358
> URL: https://issues.apache.org/jira/browse/HDFS-12358
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12358.001.patch, HDFS-12358.002.patch, 
> HDFS-12358.003.patch
>
>
> During JN sync, a missing edit log is first downloaded from a remote JN to a 
> tmp dir and then moved to the current directory (protected by the Journal's 
> synchronization). 
> Irrespective of whether the move succeeds or fails, we should delete the tmp 
> file.
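A generic sketch of that cleanup contract (hypothetical helper, not the 
actual Journal code):

{code}
// Whatever the outcome of the move, the temporary copy must not be left
// behind. Hypothetical helper, not the actual Journal code.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

final class TmpEditLogMoveSketch {
  static void moveIntoCurrent(Path tmpFile, Path target) throws IOException {
    try {
      Files.move(tmpFile, target, StandardCopyOption.ATOMIC_MOVE);
    } finally {
      // On success the source is gone already; on failure this removes the
      // stale tmp file instead of leaking it.
      Files.deleteIfExists(tmpFile);
    }
  }
}
{code}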






[jira] [Commented] (HDFS-12356) Unit test for JN sync during Rolling Upgrade

2017-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142553#comment-16142553
 ] 

Hadoop QA commented on HDFS-12356:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.TestEncryptionZonesWithHA |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestReconstructStripedFile |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12356 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883826/HDFS-12356.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bdf30e81f187 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 36bada3 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20874/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit 

[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142532#comment-16142532
 ] 

Hadoop QA commented on HDFS-11912:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestSafeModeWithStripedFile |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11912 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-12299) Race Between update pipeline and DN Re-Registration

2017-08-25 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142523#comment-16142523
 ] 

Brahma Reddy Battula commented on HDFS-12299:
-

[~kihwal], thanks for the review and commit.

> Race Between update pipeline and DN Re-Registration
> ---
>
> Key: HDFS-12299
> URL: https://issues.apache.org/jira/browse/HDFS-12299
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: HDFS-12299-branch-2-002.patch, 
> HDFS-12299-branch-2.patch, HDFS-12299.patch
>
>
>  *Scenario*
>  - Started a pipeline with DN1->DN2->DN3
>  - DN1 re-registers and update pipeline is called
>  - Update pipeline succeeds with DN1->DN3->DN4
>  - Update pipeline is called again, which fails with an NPE.
> In step 3, updatePipeline sets the storages to null, since DN1 re-registered 
> (which removes and re-adds the storages).
> {{FSNamesystem#updatePipelineInternal}}
> {code}
> lastBlock.getUnderConstructionFeature().setExpectedLocations(lastBlock,
>     storages, lastBlock.getBlockType())
> {code}






[jira] [Commented] (HDFS-12319) DirectoryScanner will throw IllegalStateException when Multiple BP's are present

2017-08-25 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142521#comment-16142521
 ] 

Brahma Reddy Battula commented on HDFS-12319:
-

[~arpitagarwal] thanks for the review and commit. [~vinayrpet] thanks for the 
additional review.
bq. Committed this through to branch-2.8
It looks like you also committed this to {{branch-2.8.2}} but forgot to 
mention it.

> DirectoryScanner will throw IllegalStateException when Multiple BP's are 
> present
> 
>
> Key: HDFS-12319
> URL: https://issues.apache.org/jira/browse/HDFS-12319
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: HDFS-12319-001.patch, HDFS-12319-002.patch, 
> TestCase_to_Reproduce.patch
>
>
> *Scenario:*
> Configure "*dfs.datanode.directoryscan.interval*" as *60* and start a
> federated cluster with at least two nameservices.
> {noformat}
> 2017-08-18 19:06:37,150 
> [java.util.concurrent.ThreadPoolExecutor$Worker@37d68b4e[State = -1, empty 
> queue]] ERROR datanode.DirectoryScanner 
> (DirectoryScanner.java:getDiskReport(551)) - Error compiling report for the 
> volume, StorageId: DS-258b5e16-caa3-48c8-a0c8-b16934eb8a0c
> java.util.concurrent.ExecutionException: java.lang.IllegalStateException: 
> StopWatch is already running
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:542)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:392)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:373)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:318)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: StopWatch is already running
>   at org.apache.hadoop.util.StopWatch.start(StopWatch.java:49)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner$ReportCompiler.call(DirectoryScanner.java:612)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner$ReportCompiler.call(DirectoryScanner.java:579)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   ... 3 more
> {noformat}
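The trace shows {{ReportCompiler.call()}} re-starting a {{StopWatch}} that is still running from the scan of the previous block pool. A minimal sketch of the shape of a fix, assuming the timer only needs to live for a single compilation pass:
{code:java}
// Sketch only: use a fresh StopWatch per call() instead of re-starting a
// shared field, so compiling a report for a second block pool cannot trip
// "StopWatch is already running".
StopWatch throttleTimer = new StopWatch().start();
try {
  // ... walk the volume and compile the report for this block pool ...
} finally {
  throttleTimer.stop();
}
{code}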



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11986) Dfsadmin should report erasure coding related information separately

2017-08-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142519#comment-16142519
 ] 

Hudson commented on HDFS-11986:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12248 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12248/])
HDFS-11986. Dfsadmin should report erasure coding related information 
(manojpec: rev b89ffcff362a872013f5d96c1fb76e0731402db4)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java


> Dfsadmin should report erasure coding related information separately 
> -
>
> Key: HDFS-11986
> URL: https://issues.apache.org/jira/browse/HDFS-11986
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-11986.01.patch, HDFS-11986.02.patch
>
>
> The dfsadmin -report command currently reports only the aggregated block
> stats, as shown below.
> {noformat}
> # hdfs dfsadmin -report
> Configured Capacity: 1498775814144 (1.36 TB)
> Present Capacity: 931852427264 (867.86 GB)
> DFS Remaining: 931805765632 (867.81 GB)
> DFS Used: 46661632 (44.50 MB)
> DFS Used%: 0.01%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> Pending deletion blocks: 0
> {noformat}
> Just like fsck, the proposal is to make the dfsadmin command report erasure
> coded block group stats separately, as shown below:
> {noformat}
> # hdfs dfsadmin -report
> Configured Capacity: 1498775814144 (1.36 TB)
> Present Capacity: 931852427264 (867.86 GB)
> DFS Remaining: 931805765632 (867.81 GB)
> DFS Used: 46661632 (44.50 MB)
> DFS Used%: 0.01%
> Replicated Blocks:
>   Under replicated blocks: 0
>   Blocks with corrupt replicas: 0
>   Missing blocks: 0
>   Missing blocks (with replication factor 1): 0
>   Pending deletion blocks: 0
> Erasure Coded Block Groups:
>   Under ec block groups: 0
>   EC block groups with corrupt internal blocks: 0
>   Missing ec block groups: 0
>   Pending deletion ec block groups: 0
> {noformat}
> The erasure coding specific details needed for this enhancement are already
> available as part of HDFS-10999.
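For illustration, a rough sketch of how the report printing might be split; {{replicatedStats}}, {{ecStats}}, and their getters are hypothetical stand-ins for the per-type stats introduced by HDFS-10999, not the actual method names:
{code:java}
// Sketch only: print the aggregated counters as two sections, one for
// replicated blocks and one for erasure coded block groups.
System.out.println("Replicated Blocks:");
System.out.println("\tUnder replicated blocks: "
    + replicatedStats.getLowRedundancyBlocks());
System.out.println("\tBlocks with corrupt replicas: "
    + replicatedStats.getCorruptBlocks());
System.out.println("Erasure Coded Block Groups:");
System.out.println("\tUnder ec block groups: "
    + ecStats.getLowRedundancyBlockGroups());
System.out.println("\tEC block groups with corrupt internal blocks: "
    + ecStats.getCorruptBlockGroups());
{code}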



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12358) Catch IOException when transferring edit log to Journal current dir through JN sync

2017-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142513#comment-16142513
 ] 

Hadoop QA commented on HDFS-12358:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.TestReplication |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.TestReencryptionHandler |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
|   | org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12358 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883820/HDFS-12358.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 913b221c0168 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f29a0fc |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |

[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-25 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142509#comment-16142509
 ] 

Manoj Govindassamy commented on HDFS-11912:
---

LGTM. +1 pending Jenkins. Thanks, [~ghuangups].

> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Assignee: George Huang
>Priority: Minor
>  Labels: TestGap
> Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch, 
> HDFS-11912.003.patch, HDFS-11912.004.patch, HDFS-11912.005.patch, 
> HDFS-11912.006.patch, HDFS-11912.007.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11986) Dfsadmin should report erasure coding related information separately

2017-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142503#comment-16142503
 ] 

Hadoop QA commented on HDFS-11986:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
| Timed out junit tests | 
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11986 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883806/HDFS-11986.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e189961f1a21 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f29a0fc |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20871/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20871/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |

[jira] [Updated] (HDFS-11986) Dfsadmin should report erasure coding related information separately

2017-08-25 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11986:
--
   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

> Dfsadmin should report erasure coding related information separately 
> -
>
> Key: HDFS-11986
> URL: https://issues.apache.org/jira/browse/HDFS-11986
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-11986.01.patch, HDFS-11986.02.patch
>
>
> The dfsadmin -report command currently reports only the aggregated block
> stats, as shown below.
> {noformat}
> # hdfs dfsadmin -report
> Configured Capacity: 1498775814144 (1.36 TB)
> Present Capacity: 931852427264 (867.86 GB)
> DFS Remaining: 931805765632 (867.81 GB)
> DFS Used: 46661632 (44.50 MB)
> DFS Used%: 0.01%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> Pending deletion blocks: 0
> {noformat}
> Just like fsck, the proposal is to make the dfsadmin command report erasure
> coded block group stats separately, as shown below:
> {noformat}
> # hdfs dfsadmin -report
> Configured Capacity: 1498775814144 (1.36 TB)
> Present Capacity: 931852427264 (867.86 GB)
> DFS Remaining: 931805765632 (867.81 GB)
> DFS Used: 46661632 (44.50 MB)
> DFS Used%: 0.01%
> Replicated Blocks:
>   Under replicated blocks: 0
>   Blocks with corrupt replicas: 0
>   Missing blocks: 0
>   Missing blocks (with replication factor 1): 0
>   Pending deletion blocks: 0
> Erasure Coded Block Groups:
>   Under ec block groups: 0
>   EC block groups with corrupt internal blocks: 0
>   Missing ec block groups: 0
>   Pending deletion ec block groups: 0
> {noformat}
> The erasure coding specific details needed for this enhancement are already
> available as part of HDFS-10999.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11986) Dfsadmin should report erasure coding related information separately

2017-08-25 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142488#comment-16142488
 ] 

Manoj Govindassamy commented on HDFS-11986:
---

Thanks for the review, [~eddyxu].
Committed to trunk.

> Dfsadmin should report erasure coding related information separately 
> -
>
> Key: HDFS-11986
> URL: https://issues.apache.org/jira/browse/HDFS-11986
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-11986.01.patch, HDFS-11986.02.patch
>
>
> The dfsadmin -report command currently reports only the aggregated block
> stats, as shown below.
> {noformat}
> # hdfs dfsadmin -report
> Configured Capacity: 1498775814144 (1.36 TB)
> Present Capacity: 931852427264 (867.86 GB)
> DFS Remaining: 931805765632 (867.81 GB)
> DFS Used: 46661632 (44.50 MB)
> DFS Used%: 0.01%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> Pending deletion blocks: 0
> {noformat}
> Just like fsck, the proposal is to make the dfsadmin command report erasure
> coded block group stats separately, as shown below:
> {noformat}
> # hdfs dfsadmin -report
> Configured Capacity: 1498775814144 (1.36 TB)
> Present Capacity: 931852427264 (867.86 GB)
> DFS Remaining: 931805765632 (867.81 GB)
> DFS Used: 46661632 (44.50 MB)
> DFS Used%: 0.01%
> Replicated Blocks:
>   Under replicated blocks: 0
>   Blocks with corrupt replicas: 0
>   Missing blocks: 0
>   Missing blocks (with replication factor 1): 0
>   Pending deletion blocks: 0
> Erasure Coded Block Groups:
>   Under ec block groups: 0
>   EC block groups with corrupt internal blocks: 0
>   Missing ec block groups: 0
>   Pending deletion ec block groups: 0
> {noformat}
> The erasure coding specific details needed for this enhancement are already
> available as part of HDFS-10999.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12356) Unit test for JN sync during Rolling Upgrade

2017-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142486#comment-16142486
 ] 

Hadoop QA commented on HDFS-12356:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.TestWriteReadStripedFile |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12356 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883819/HDFS-12356.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9496235acf3f 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f29a0fc |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20870/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20870/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20870/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |

[jira] [Commented] (HDFS-11986) Dfsadmin should report erasure coding related information separately

2017-08-25 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142484#comment-16142484
 ] 

Manoj Govindassamy commented on HDFS-11986:
---

The test failures are not related to the patch. Except for
TestLeaseRecoveryStriped, the other tests pass locally.

> Dfsadmin should report erasure coding related information separately 
> -
>
> Key: HDFS-11986
> URL: https://issues.apache.org/jira/browse/HDFS-11986
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-11986.01.patch, HDFS-11986.02.patch
>
>
> The dfsadmin -report command currently reports only the aggregated block
> stats, as shown below.
> {noformat}
> # hdfs dfsadmin -report
> Configured Capacity: 1498775814144 (1.36 TB)
> Present Capacity: 931852427264 (867.86 GB)
> DFS Remaining: 931805765632 (867.81 GB)
> DFS Used: 46661632 (44.50 MB)
> DFS Used%: 0.01%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> Pending deletion blocks: 0
> {noformat}
> Just like fsck, the proposal is to make the dfsadmin command report erasure
> coded block group stats separately, as shown below:
> {noformat}
> # hdfs dfsadmin -report
> Configured Capacity: 1498775814144 (1.36 TB)
> Present Capacity: 931852427264 (867.86 GB)
> DFS Remaining: 931805765632 (867.81 GB)
> DFS Used: 46661632 (44.50 MB)
> DFS Used%: 0.01%
> Replicated Blocks:
>   Under replicated blocks: 0
>   Blocks with corrupt replicas: 0
>   Missing blocks: 0
>   Missing blocks (with replication factor 1): 0
>   Pending deletion blocks: 0
> Erasure Coded Block Groups:
>   Under ec block groups: 0
>   EC block groups with corrupt internal blocks: 0
>   Missing ec block groups: 0
>   Pending deletion ec block groups: 0
> {noformat}
> The erasure coding specific details needed for this enhancement are already
> available as part of HDFS-10999.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12359) Re-encryption should operate with minimum KMS ACL requirements.

2017-08-25 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-12359:


 Summary: Re-encryption should operate with minimum KMS ACL 
requirements.
 Key: HDFS-12359
 URL: https://issues.apache.org/jira/browse/HDFS-12359
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: encryption
Affects Versions: 3.0.0-beta1
Reporter: Xiao Chen
Assignee: Xiao Chen


This was caught during KMS ACL testing.

HDFS-10899 gets the current key versions from the KMS directly, which requires
{{READ}} ACLs.
It also calls invalidateCache, which requires {{MANAGEMENT}} ACLs.

We should fix re-encryption so that it does not require more ACLs than the
original encryption did.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12215) DataNode#transferBlock does not create its daemon in the xceiver thread group

2017-08-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142456#comment-16142456
 ] 

Hudson commented on HDFS-12215:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12247 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12247/])
HDFS-12215. DataNode.transferBlock does not create its daemon in the (lei: rev 
36bada3032e438099ada9d865c3945d42c3e7c2a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


> DataNode#transferBlock does not create its daemon in the xceiver thread group
> -
>
> Key: HDFS-12215
> URL: https://issues.apache.org/jira/browse/HDFS-12215
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-12215.00.patch
>
>
> As mentioned in HDFS-12044, the DataNode#transferBlock daemon is not counted
> toward the xceiver count.
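A sketch of the shape of the change, with hypothetical names; the point is that the transfer thread should be created inside the same ThreadGroup used to count active xceivers:
{code:java}
// Sketch only: create the DataTransfer daemon in the xceiver thread group so
// it is accounted for in the DataNode's xceiver count. "xceiverThreadGroup"
// and the abbreviated constructor arguments are illustrative, not the actual
// field names in DataNode.java.
Daemon transferDaemon = new Daemon(xceiverThreadGroup,
    new DataTransfer(targets, targetStorageTypes, block, stage, clientName));
transferDaemon.start();
{code}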



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12358) Catch IOException when transferring edit log to Journal current dir through JN sync

2017-08-25 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142447#comment-16142447
 ] 

Arpit Agarwal commented on HDFS-12358:
--

+1 for the v3 patch, pending Jenkins.

> Catch IOException when transferring edit log to Journal current dir through 
> JN sync
> ---
>
> Key: HDFS-12358
> URL: https://issues.apache.org/jira/browse/HDFS-12358
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12358.001.patch, HDFS-12358.002.patch, 
> HDFS-12358.003.patch
>
>
> During JN sync, a missing edit log is first downloaded from a remote JN to a 
> tmp dir and then moved to the current directory (protected by the Journal's 
> synchronization). 
> Irrespective of whether the move succeeds or fails, we should delete the tmp 
> file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7878) API - expose an unique file identifier

2017-08-25 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-7878:

Attachment: HDFS-7878.08.patch

> API - expose an unique file identifier
> --
>
> Key: HDFS-7878
> URL: https://issues.apache.org/jira/browse/HDFS-7878
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7878.01.patch, HDFS-7878.02.patch, 
> HDFS-7878.03.patch, HDFS-7878.04.patch, HDFS-7878.05.patch, 
> HDFS-7878.06.patch, HDFS-7878.07.patch, HDFS-7878.08.patch, HDFS-7878.patch
>
>
> See HDFS-487.
> Even though that is resolved as a duplicate, the ID is actually not exposed
> by the JIRA it supposedly duplicates.
> The file's inode ID should be easy to expose; alternatively, an ID could be
> derived from block IDs to account for appends...
> This is useful, e.g., as a per-file cache key, to make sure the cache stays
> correct when a file is overwritten.
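To make the use case concrete, a sketch of the cache the description has in mind; {{getFileId(path)}} and {{loadAndCache(path)}} are hypothetical helpers standing in for whatever accessor the API ends up exposing:
{code:java}
// Sketch only: key cached data by file ID rather than by path, so that
// overwriting a path cannot serve stale entries for the new file.
Map<Long, CachedData> cache = new HashMap<>();

long fileId = getFileId(path);
CachedData data = cache.get(fileId);
if (data == null) {
  data = loadAndCache(path);
  cache.put(fileId, data);
}
{code}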



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12332) Logging improvement suggestion for SampleStat function MinMax.add

2017-08-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12332:
---
Target Version/s: 3.0.0-beta1  (was: 3.0.0-alpha4)

> Logging improvement suggestion for SampleStat function MinMax.add
> -
>
> Key: HDFS-12332
> URL: https://issues.apache.org/jira/browse/HDFS-12332
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.0.0-alpha4
>Reporter: Xu Zhao
>Priority: Minor
> Attachments: hdfs-12332.patch
>
>
> In order to debug performance anomalies, it is better to see when the max and
> min values get updated in metrics.
> For example, in our workload we would like to see the longest latency of a
> write block request on a DataNode.
> The following metric-updating code could possibly update the min/max values in
> the SampleStat class:
> {code:java}
> datanode.getMetrics().addWriteBlockOp(elapsed());
> datanode.getMetrics().incrWritesFromClient(peer.isLocal(), size);
> {code}
> Here I am attaching an attempt to patch it.
> The patch just adds DEBUG-level logging in the MinMax class and prints the new
> value whenever it gets updated.
> Please let me know what you think. For example, if this patch would introduce
> too much noise, what is the best way to trace the readBlock/writeBlock request
> that has the minimum/maximum latency?
> Any comments are appreciated!
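A minimal sketch of what the attached patch proposes, assuming {{MinMax}} keeps running {{min}}/{{max}} fields as in {{org.apache.hadoop.metrics2.util.SampleStat}}:
{code:java}
// Sketch only: emit a DEBUG line whenever a new extreme is observed, so the
// request that produced the longest (or shortest) latency can be correlated
// with surrounding log output.
void add(final double value) {
  if (value > max) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("New max sample " + value + " replaces " + max);
    }
    max = value;
  }
  if (value < min) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("New min sample " + value + " replaces " + min);
    }
    min = value;
  }
}
{code}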



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12358) Catch IOException when transferring edit log to Journal current dir through JN sync

2017-08-25 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12358:
--
Attachment: HDFS-12358.003.patch

Thanks for the catch, Arpit.
Fixed it in patch v03.

> Catch IOException when transferring edit log to Journal current dir through 
> JN sync
> ---
>
> Key: HDFS-12358
> URL: https://issues.apache.org/jira/browse/HDFS-12358
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12358.001.patch, HDFS-12358.002.patch, 
> HDFS-12358.003.patch
>
>
> During JN sync, a missing edit log is first downloaded from a remote JN to a 
> tmp dir and then moved to the current directory (protected by the Journal's 
> synchronization). 
> Irrespective of whether the move succeeds or fails, we should delete the tmp 
> file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12358) Catch IOException when transferring edit log to Journal current dir through JN sync

2017-08-25 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142436#comment-16142436
 ] 

Arpit Agarwal commented on HDFS-12358:
--

One more comment; sorry I missed it last time. The delete attempt and the
following log message should only be executed on failure:
{code}
   if (!tmpEditsFile.delete()) {
 LOG.warn("Deleting " + tmpEditsFile + " has failed");
{code}
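In other words, something like the following sketch, assuming {{moveSuccess}} records whether the move into the Journal's {{current/}} directory succeeded:
{code:java}
// Sketch only: attempt the tmp-file delete, and warn about it, only when the
// move failed and therefore actually left the tmp file behind.
if (!moveSuccess && !tmpEditsFile.delete()) {
  LOG.warn("Deleting " + tmpEditsFile + " has failed");
}
{code}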

> Catch IOException when transferring edit log to Journal current dir through 
> JN sync
> ---
>
> Key: HDFS-12358
> URL: https://issues.apache.org/jira/browse/HDFS-12358
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12358.001.patch, HDFS-12358.002.patch
>
>
> During JN sync, a missing edit log is first downloaded from a remote JN to a 
> tmp dir and then moved to the current directory (protected by the Journal's 
> synchronization). 
> Irrespective of whether the move succeeds or fails, we should delete the tmp 
> file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7337) Configurable and pluggable Erasure Codec and schema

2017-08-25 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142418#comment-16142418
 ] 

Andrew Wang commented on HDFS-7337:
---

Hey folks, are we on track for completing this by beta1 in ~three weeks? We 
want metadata and API changes to be complete by then.

> Configurable and pluggable Erasure Codec and schema
> ---
>
> Key: HDFS-7337
> URL: https://issues.apache.org/jira/browse/HDFS-7337
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: erasure-coding
>Reporter: Zhe Zhang
>Priority: Critical
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-7337-prototype-v1.patch, 
> HDFS-7337-prototype-v2.zip, HDFS-7337-prototype-v3.zip, 
> PluggableErasureCodec.pdf, PluggableErasureCodec-v2.pdf, 
> PluggableErasureCodec-v3.pdf, PluggableErasureCodec v4.pdf
>
>
> According to HDFS-7285 and the design, this aims to support multiple erasure
> codecs via a pluggable approach. It allows defining and configuring multiple
> codec schemas with different coding algorithms and parameters. The resultant
> codec schemas can then be utilized and specified via the command tool for
> different file folders. While designing and implementing such a pluggable
> framework, the plan is also to implement a concrete default codec
> (Reed-Solomon) to prove the framework is useful and workable. A separate JIRA
> could be opened for the RS codec implementation.
> Note that HDFS-7353 will focus on the very low-level codec API and
> implementation to make concrete vendor libraries transparent to the upper
> layer. This JIRA focuses on the high-level pieces that interact with
> configuration, schemas, etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12356) Unit test for JN sync during Rolling Upgrade

2017-08-25 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142416#comment-16142416
 ] 

Arpit Agarwal commented on HDFS-12356:
--

+1 for the v2 patch, pending Jenkins.

> Unit test for JN sync during Rolling Upgrade
> 
>
> Key: HDFS-12356
> URL: https://issues.apache.org/jira/browse/HDFS-12356
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12356.001.patch, HDFS-12356.002.patch
>
>
> Adding unit tests for testing JN sync functionality during rolling upgrade of 
> NN.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12358) Catch IOException when transferring edit log to Journal current dir through JN sync

2017-08-25 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12358:
--
Attachment: HDFS-12358.002.patch

Thanks for the review [~arpitagarwal]. 
Addressed your comments in patch v02.

> Catch IOException when transferring edit log to Journal current dir through 
> JN sync
> ---
>
> Key: HDFS-12358
> URL: https://issues.apache.org/jira/browse/HDFS-12358
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12358.001.patch, HDFS-12358.002.patch
>
>
> During JN sync, a missing edit log is first downloaded from a remote JN to a 
> tmp dir and then moved to the current directory (protected by the Journal's 
> synchronization). 
> Irrespective of whether the move succeeds or fails, we should delete the tmp 
> file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12356) Unit test for JN sync during Rolling Upgrade

2017-08-25 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12356:
--
Attachment: HDFS-12356.002.patch

Thanks for the review [~arpitagarwal].
Posted patch v02 with an improvement: _deleteEditLogsFromRandomJN()_ now
deletes a random edit log instead of the last one, as sketched below.
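A sketch of the improved helper with hypothetical names; the idea is to exercise JN sync against an arbitrary missing segment rather than always the most recent one:
{code:java}
// Sketch only: pick a random JournalNode and a random finalized edit segment
// in its journal's current/ directory, then delete that segment so sync has
// something to re-download. getJournalCurrentDir() and numJournalNodes are
// hypothetical.
Random rand = new Random();
File journalCurrentDir = getJournalCurrentDir(rand.nextInt(numJournalNodes));
File[] editLogs = journalCurrentDir.listFiles(
    (dir, name) -> name.startsWith("edits_") && !name.contains("inprogress"));
File toDelete = editLogs[rand.nextInt(editLogs.length)];
assertTrue(toDelete.delete());
{code}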

> Unit test for JN sync during Rolling Upgrade
> 
>
> Key: HDFS-12356
> URL: https://issues.apache.org/jira/browse/HDFS-12356
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12356.001.patch, HDFS-12356.002.patch
>
>
> Adding unit tests for testing JN sync functionality during rolling upgrade of 
> NN.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11986) Dfsadmin should report erasure coding related information separately

2017-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142404#comment-16142404
 ] 

Hadoop QA commented on HDFS-11986:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m  4s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11986 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883806/HDFS-11986.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7e0233416995 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f29a0fc |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20869/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20869/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20869/console |

[jira] [Commented] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-25 Thread George Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142397#comment-16142397
 ] 

George Huang commented on HDFS-11912:
-

Thanks, Manoj, for the review. The change was made and the patch uploaded.

Many thanks,
George

> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Assignee: George Huang
>Priority: Minor
>  Labels: TestGap
> Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch, 
> HDFS-11912.003.patch, HDFS-11912.004.patch, HDFS-11912.005.patch, 
> HDFS-11912.006.patch, HDFS-11912.007.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-25 Thread George Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

George Huang updated HDFS-11912:

Attachment: HDFS-11912.007.patch

> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Assignee: George Huang
>Priority: Minor
>  Labels: TestGap
> Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch, 
> HDFS-11912.003.patch, HDFS-11912.004.patch, HDFS-11912.005.patch, 
> HDFS-11912.006.patch, HDFS-11912.007.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12215) DataNode#transferBlock does not create its daemon in the xceiver thread group

2017-08-25 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12215:
-
Fix Version/s: 2.9.0

> DataNode#transferBlock does not create its daemon in the xceiver thread group
> -
>
> Key: HDFS-12215
> URL: https://issues.apache.org/jira/browse/HDFS-12215
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-12215.00.patch
>
>
> As mentioned in HDFS-12044, the DataNode#transferBlock daemon is not counted
> toward the xceiver count.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12215) DataNode#transferBlock does not create its daemon in the xceiver thread group

2017-08-25 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12215:
-
  Resolution: Fixed
   Fix Version/s: 3.0.0-beta1
Target Version/s: 2.9.0, 3.0.0-beta1  (was: 3.0.0-beta1)
  Status: Resolved  (was: Patch Available)

Thanks a lot for the reviews, [~hanishakoneru] and [~andrew.wang].

The test failures are not relevant; the tests passed locally on my laptop.

Committed to trunk and branch-2.

> DataNode#transferBlock does not create its daemon in the xceiver thread group
> -
>
> Key: HDFS-12215
> URL: https://issues.apache.org/jira/browse/HDFS-12215
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12215.00.patch
>
>
> As mentioned in HDFS-12044, the DataNode#transferBlock daemon is not counted
> toward the xceiver count.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-25 Thread George Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

George Huang updated HDFS-11912:

Attachment: (was: HDFS-11912.007.patch)

> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Assignee: George Huang
>Priority: Minor
>  Labels: TestGap
> Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch, 
> HDFS-11912.003.patch, HDFS-11912.004.patch, HDFS-11912.005.patch, 
> HDFS-11912.006.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-25 Thread George Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

George Huang updated HDFS-11912:

Attachment: HDFS-11912.007.patch

> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Assignee: George Huang
>Priority: Minor
>  Labels: TestGap
> Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch, 
> HDFS-11912.003.patch, HDFS-11912.004.patch, HDFS-11912.005.patch, 
> HDFS-11912.006.patch, HDFS-11912.007.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-25 Thread George Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

George Huang updated HDFS-11912:

Attachment: (was: HDFS-11912.007.patch)

> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Assignee: George Huang
>Priority: Minor
>  Labels: TestGap
> Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch, 
> HDFS-11912.003.patch, HDFS-11912.004.patch, HDFS-11912.005.patch, 
> HDFS-11912.006.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-08-25 Thread George Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

George Huang updated HDFS-11912:

Attachment: HDFS-11912.007.patch

> Add a snapshot unit test with randomized file IO operations
> ---
>
> Key: HDFS-11912
> URL: https://issues.apache.org/jira/browse/HDFS-11912
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: George Huang
>Assignee: George Huang
>Priority: Minor
>  Labels: TestGap
> Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch, 
> HDFS-11912.003.patch, HDFS-11912.004.patch, HDFS-11912.005.patch, 
> HDFS-11912.006.patch, HDFS-11912.007.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12358) Catch IOException when transferring edit log to Journal current dir through JN sync

2017-08-25 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142373#comment-16142373
 ] 

Arpit Agarwal commented on HDFS-12358:
--

Thanks for this improvement [~hanishakoneru]. Minor comments:
# Initialize moveSuccess to false at declaration:
{code}
+boolean moveSuccess;
{code}
# You can delete this line from the catch block after fixing 1.
{code}
+  moveSuccess = false;
{code}
# The log statement can be at INFO level so we see the failed move in the log 
file by default.
{code}
+  LOG.debug("Could not move %s to current directory.", tmpEditsFile);
{code}

+1 otherwise.
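Taken together, the three comments point at a shape like the sketch below 
({{moveTmpSegmentToCurrent}} is a stand-in name, not the method in the patch):

{code}
boolean moveSuccess = false;           // (1) initialized at declaration
try {
  // Move the downloaded segment from the tmp dir into current/.
  moveSuccess = moveTmpSegmentToCurrent(tmpEditsFile, finalEditsFile);
} catch (IOException e) {
  // (2) no "moveSuccess = false" needed here any more.
  // (3) INFO level, so the failed move is visible in logs by default.
  LOG.info("Could not move {} to current directory.", tmpEditsFile);
} finally {
  // Per the jira: delete the tmp file whether or not the move succeeded.
  if (tmpEditsFile.exists() && !tmpEditsFile.delete()) {
    LOG.warn("Could not delete tmp file {}", tmpEditsFile);
  }
}
{code}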

> Catch IOException when transferring edit log to Journal current dir through 
> JN sync
> ---
>
> Key: HDFS-12358
> URL: https://issues.apache.org/jira/browse/HDFS-12358
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12358.001.patch
>
>
> During JN sync, a missing edit log is first downloaded from a remote JN to a 
> tmp dir and then moved to the current directory (protected by the Journal's 
> synchronization). 
> Irrespective of whether the move succeeds or fails, we should delete the tmp 
> file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12358) Catch IOException when transferring edit log to Journal current dir through JN sync

2017-08-25 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12358:
--
Status: Patch Available  (was: Open)

> Catch IOException when transferring edit log to Journal current dir through 
> JN sync
> ---
>
> Key: HDFS-12358
> URL: https://issues.apache.org/jira/browse/HDFS-12358
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12358.001.patch
>
>
> During JN sync, a missing edit log is first downloaded from a remote JN to a 
> tmp dir and then moved to the current directory (protected by the Journal's 
> synchronization). 
> Irrespective of whether the move succeeds or fails, we should delete the tmp 
> file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12358) Catch IOException when transferring edit log to Journal current dir through JN sync

2017-08-25 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12358:
--
Attachment: HDFS-12358.001.patch

> Catch IOException when transferring edit log to Journal current dir through 
> JN sync
> ---
>
> Key: HDFS-12358
> URL: https://issues.apache.org/jira/browse/HDFS-12358
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12358.001.patch
>
>
> During JN sync, a missing edit log is first downloaded from a remote JN to a 
> tmp dir and then moved to the current directory (protected by the Journal's 
> synchronization). 
> Irrespective of whether the move succeeds or fails, we should delete the tmp 
> file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12356) Unit test for JN sync during Rolling Upgrade

2017-08-25 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142363#comment-16142363
 ] 

Arpit Agarwal commented on HDFS-12356:
--

+1 pending Jenkins.

Thanks [~hanishakoneru].

> Unit test for JN sync during Rolling Upgrade
> 
>
> Key: HDFS-12356
> URL: https://issues.apache.org/jira/browse/HDFS-12356
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12356.001.patch
>
>
> Adding unit tests for testing JN sync functionality during rolling upgrade of 
> NN.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12215) DataNode#transferBlock does not create its daemon in the xceiver thread group

2017-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142356#comment-16142356
 ] 

Hadoop QA commented on HDFS-12215:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestMissingBlocksAlert |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12215 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883799/HDFS-12215.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0a8cf6786394 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e864f81 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20867/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20867/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20867/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Updated] (HDFS-12358) Catch IOException when transferring edit log to Journal current dir through JN sync

2017-08-25 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12358:
--
Description: 
During JN sync, a missing edit log is first downloaded from a remote JN to a 
tmp dir and then moved to the current directory (protected by the Journal's 
synchronization). 
Irrespective of whether the move succeeds or fails, we should delete the tmp 
file.


  was:During JN sync, a missing edit log is first downloaded from a remote JN 
to a tmp dir and then moved to the current directory (protected by the 
Journal's synchronization). If this move fails, we need to catch it and delete 
the downloaded edit log from the tmp dir.


> Catch IOException when transferring edit log to Journal current dir through 
> JN sync
> ---
>
> Key: HDFS-12358
> URL: https://issues.apache.org/jira/browse/HDFS-12358
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>
> During JN sync, a missing edit log is first downloaded from a remote JN to a 
> tmp dir and then moved to the current directory (protected by the Journal's 
> synchronization). 
> Irrespective of whether the move succeeds or fails, we should delete the tmp 
> file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12358) Catch IOException when transferring edit log to Journal current dir through JN sync

2017-08-25 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDFS-12358:
-

 Summary: Catch IOException when transferring edit log to Journal 
current dir through JN sync
 Key: HDFS-12358
 URL: https://issues.apache.org/jira/browse/HDFS-12358
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


During JN sync, a missing edit log is first downloaded from a remote JN to a 
tmp dir and then moved to the current directory (protected by the Journal's 
synchronization). If this move fails, we need to catch it and delete the 
downloaded edit log from the tmp dir.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12356) Unit test for JN sync during Rolling Upgrade

2017-08-25 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12356:
--
Attachment: HDFS-12356.001.patch

> Unit test for JN sync during Rolling Upgrade
> 
>
> Key: HDFS-12356
> URL: https://issues.apache.org/jira/browse/HDFS-12356
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12356.001.patch
>
>
> Adding unit tests for testing JN sync functionality during rolling upgrade of 
> NN.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12356) Unit test for JN sync during Rolling Upgrade

2017-08-25 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12356:
--
Status: Patch Available  (was: Open)

> Unit test for JN sync during Rolling Upgrade
> 
>
> Key: HDFS-12356
> URL: https://issues.apache.org/jira/browse/HDFS-12356
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12356.001.patch
>
>
> Adding unit tests for testing JN sync functionality during rolling upgrade of 
> NN.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12350) Support meta tags in configs

2017-08-25 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12350:
--
Description: 
We should add a meta tag extension to the hadoop/hdfs config so that we can 
retrieve properties by various tags like PERFORMANCE, NAMENODE etc. Right now 
we don't have an option available to group or list properties related to 
performance or security or datanodes. Grouping properties through some 
restricted set of meta tags and then exposing them in the Configuration class 
will be useful for end users.
For example, here is a config with meta tags.

{code}
<configuration>
  <property>
    <name>dfs.namenode.servicerpc-bind-host</name>
    <value>localhost</value>
    <tag>REQUIRED</tag>
  </property>
  <property>
    <name>dfs.namenode.fs-limits.min-block-size</name>
    <value>1048576</value>
    <tag>PERFORMANCE</tag>
  </property>
  <property>
    <name>dfs.namenode.logging.level</name>
    <value>Info</value>
    <tag>TRACE, DEBUG</tag>
  </property>
</configuration>
{code}

  was:
The proposal is to add a domain tag extension to the config so that ozone can 
use it in its configs.
For example, here is an Ozone config with domain tags.

{code}
<configuration>
  <property>
    <name>ozone.enabled</name>
    <value>True</value>
    <tag>REQUIRED</tag>
  </property>
  <property>
    <name>dfs.cblock.trace.io</name>
    <value>False</value>
    <tag>TRACE, DEBUG</tag>
  </property>
</configuration>
{code}


> Support meta tags in configs
> 
>
> Key: HDFS-12350
> URL: https://issues.apache.org/jira/browse/HDFS-12350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>
> We should add a meta tag extension to the hadoop/hdfs config so that we can 
> retrieve properties by various tags like PERFORMANCE, NAMENODE etc. Right now 
> we don't have an option available to group or list properties related to 
> performance or security or datanodes. Grouping properties through some 
> restricted set of meta tags and then exposing them in the Configuration class 
> will be useful for end users.
> For example, here is a config with meta tags.
> {code}
> <configuration>
>   <property>
>     <name>dfs.namenode.servicerpc-bind-host</name>
>     <value>localhost</value>
>     <tag>REQUIRED</tag>
>   </property>
>   <property>
>     <name>dfs.namenode.fs-limits.min-block-size</name>
>     <value>1048576</value>
>     <tag>PERFORMANCE</tag>
>   </property>
>   <property>
>     <name>dfs.namenode.logging.level</name>
>     <value>Info</value>
>     <tag>TRACE, DEBUG</tag>
>   </property>
> </configuration>
> {code}
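To make the intended end-user experience concrete, a usage sketch — 
{{getAllPropertiesByTag}} below is a hypothetical accessor illustrating the 
proposal, not an existing Configuration API:

{code}
import java.util.Properties;
import org.apache.hadoop.conf.Configuration;

public class TagLookupSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Hypothetical accessor: return every loaded property tagged PERFORMANCE.
    Properties perf = conf.getAllPropertiesByTag("PERFORMANCE");
    for (String name : perf.stringPropertyNames()) {
      System.out.println(name + " = " + perf.getProperty(name));
    }
  }
}
{code}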



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12303) Change default EC cell size to 1MB for better performance

2017-08-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142284#comment-16142284
 ] 

Hudson commented on HDFS-12303:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12246 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12246/])
HDFS-12303. Change default EC cell size to 1MB for better performance. (wang: 
rev f29a0fc288a625522ba910e61b63fd5f10418b3d)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicies.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testErasureCodingConf.xml
* (edit) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestErasureCodingCLI.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SystemErasureCodingPolicies.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/test_ec_policies.xml


> Change default EC cell size to 1MB for better performance
> -
>
> Key: HDFS-12303
> URL: https://issues.apache.org/jira/browse/HDFS-12303
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12303.00.patch, HDFS-12303.01.patch, 
> HDFS-12303.02.patch
>
>
> As discussed in HDFS-11814, a 1MB cell size shows better performance than 
> the other sizes during the tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11834) Ozone: Fix TestArchive#testArchive

2017-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142262#comment-16142262
 ] 

Hadoop QA commented on HDFS-11834:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
35s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11834 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883803/HDFS-11834-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5896ed22c630 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 1586f20 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20868/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20868/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Fix TestArchive#testArchive
> --
>
> Key: HDFS-11834
> URL: https://issues.apache.org/jira/browse/HDFS-11834
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
> Attachments: HDFS-11834-HDFS-7240.001.patch
>
>
> This Adler32-based CRC check passes on Mac but mismatches on some 
> Jenkins machines, based on some recent Jenkins runs:
> {code}
> org.apache.hadoop.scm.TestArchive.testArchive
> Failing for the past 1 build (Since Failed#19352 )
> Took 21 sec.
> Error Message
> expected:<3488429799> but was:<2161587943>
> Stacktrace
> java.lang.AssertionError: expected:<3488429799> but was:<2161587943>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at org.apache.hadoop.scm.TestArchive.testArchive(TestArchive.java:104)
> {code}

[jira] [Resolved] (HDFS-10825) Snapshot read can reveal future bytes if snapshotted while writing

2017-08-25 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy resolved HDFS-10825.
---
   Resolution: Duplicate
Fix Version/s: 3.0.0-beta1
   2.9.0

HDFS-11402 solves the core issue of making HDFS snapshots immutable w.r.t. open 
files. Tested with the patch attached to this jira; with the fix, the test 
passes. Closing this jira as a duplicate. Please let me know if you think 
otherwise.

> Snapshot read can reveal future bytes if snapshotted while writing
> --
>
> Key: HDFS-10825
> URL: https://issues.apache.org/jira/browse/HDFS-10825
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.2
> Environment: HDFS-2.7.2, see attached unittest file.
>Reporter: Abhishek Rai
>Assignee: Manoj Govindassamy
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: TestSnapshotFileBeingWritten.java
>
>
> The following sequence of steps will produce extra bytes, that should not be 
> visible, because they are not in the snapshot.
> - Create a new file for writing.
> - Write "hello world"
> - Invoke hsync() on the file handle.
> - Create a snapshot, keep the file open.
> - Append another "hello world" string to the same file handle.
> - Close the file.
> - Read file in the snapshot (not the current file).
> - Output is "hello worldhello world" instead of the expected snapshot 
> contents of "hello world".
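For anyone wanting to reproduce the sequence above, a minimal sketch against a 
{{MiniDFSCluster}} (paths and the snapshot name are arbitrary):

{code}
// Sketch of the reproduction steps; assumes a running MiniDFSCluster.
DistributedFileSystem fs = cluster.getFileSystem();
Path dir = new Path("/test");
Path file = new Path(dir, "f");
fs.mkdirs(dir);
fs.allowSnapshot(dir);

FSDataOutputStream out = fs.create(file);
out.writeBytes("hello world");
out.hsync();                             // flush to DNs, keep the file open

fs.createSnapshot(dir, "s1");            // snapshot while the file is open
out.writeBytes("hello world");           // append via the same handle
out.close();

// Read the file through the snapshot path, not the live path.
Path snapFile = new Path(dir, ".snapshot/s1/f");
// Expected: "hello world"; observed with the bug: "hello worldhello world"
System.out.println(DFSTestUtil.readFile(fs, snapFile));
{code}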



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11986) Dfsadmin should report erasure coding related information separately

2017-08-25 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142259#comment-16142259
 ] 

Lei (Eddy) Xu commented on HDFS-11986:
--

LGTM. +1 pending Jenkins. 

Thanks  Manoj.

> Dfsadmin should report erasure coding related information separately 
> -
>
> Key: HDFS-11986
> URL: https://issues.apache.org/jira/browse/HDFS-11986
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-11986.01.patch, HDFS-11986.02.patch
>
>
> The dfsadmin -report command currently reports only the aggregated block 
> stats like below.
> {noformat}
> # hdfs dfsadmin -report
> Configured Capacity: 1498775814144 (1.36 TB)
> Present Capacity: 931852427264 (867.86 GB)
> DFS Remaining: 931805765632 (867.81 GB)
> DFS Used: 46661632 (44.50 MB)
> DFS Used%: 0.01%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> Pending deletion blocks: 0
> {noformat}
> Just like fsck, the proposal is to make the dfsadmin command report erasure 
> coding block group stats separately, like below
> {noformat}
> # hdfs dfsadmin -report
> Configured Capacity: 1498775814144 (1.36 TB)
> Present Capacity: 931852427264 (867.86 GB)
> DFS Remaining: 931805765632 (867.81 GB)
> DFS Used: 46661632 (44.50 MB)
> DFS Used%: 0.01%
> Replicated Blocks:
>   Under replicated blocks: 0
>   Blocks with corrupt replicas: 0
>   Missing blocks: 0
>   Missing blocks (with replication factor 1): 0
>   Pending deletion blocks: 0
> Erasure Coded Block Groups:
>   Under ec block groups: 0
>   EC block groups with corrupt internal blocks: 0
>   Missing ec block groups: 0
>   Pending deletion ec block groups: 0
> {noformat}
> Erasure coding specific details needed for this enhancements are already made 
> available as part of HDFS-10999.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-10825) Snapshot read can reveal future bytes if snapshotted while writing

2017-08-25 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-10825 started by Manoj Govindassamy.
-
> Snapshot read can reveal future bytes if snapshotted while writing
> --
>
> Key: HDFS-10825
> URL: https://issues.apache.org/jira/browse/HDFS-10825
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.2
> Environment: HDFS-2.7.2, see attached unittest file.
>Reporter: Abhishek Rai
>Assignee: Manoj Govindassamy
> Attachments: TestSnapshotFileBeingWritten.java
>
>
> The following sequence of steps will produce extra bytes, that should not be 
> visible, because they are not in the snapshot.
> - Create a new file for writing.
> - Write "hello world"
> - Invoke hsync() on the file handle.
> - Create a snapshot, keep the file open.
> - Append another "hello world" string to the same file handle.
> - Close the file.
> - Read file in the snapshot (not the current file).
> - Output is "hello worldhello world" instead of the expected snapshot 
> contents of "hello world".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12349) Improve log message when it could not alloc enough blocks for EC

2017-08-25 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142243#comment-16142243
 ] 

Lei (Eddy) Xu commented on HDFS-12349:
--

Many of the failures are due to the error message change. Working on the fix.

> Improve log message when it could not alloc enough blocks for EC 
> -
>
> Key: HDFS-12349
> URL: https://issues.apache.org/jira/browse/HDFS-12349
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HDFS-12349.00.patch
>
>
> When an EC output stream cannot alloc enough blocks for parity blocks, it 
> logs the warning below.
> {code}
> if (blocks[i] == null) {
> LOG.warn("Failed to get block location for parity block, index=" + i);
> {code}
> We should clarify the cause of this warning message.
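As one possible direction — a sketch only, not the patch under review, and 
{{ecPolicy}}/{{excludedNodes}} are assumed to be in scope — the warning could 
spell out the likely cause and the allocation context:

{code}
if (blocks[i] == null) {
  // Name the policy and the excluded nodes so the operator can tell
  // whether the cluster simply lacks enough datanodes or racks.
  LOG.warn("Cannot allocate parity block (index={}, policy={}). "
      + "Excluded nodes={}. There may not be enough datanodes or racks.",
      i, ecPolicy.getName(), excludedNodes);
}
{code}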



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11885) createEncryptionZone should not block on initializing EDEK cache

2017-08-25 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142232#comment-16142232
 ] 

Rushabh S Shah commented on HDFS-11885:
---

bq. Rushabh, we'd still appreciate your patch if you have it.
I apologize for not replying earlier.
I will create a new jira as soon as I have some cycles to upmerge the internal 
working 2.8 patch to trunk and write new test cases.

> createEncryptionZone should not block on initializing EDEK cache
> 
>
> Key: HDFS-11885
> URL: https://issues.apache.org/jira/browse/HDFS-11885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-11885.001.patch, HDFS-11885.002.patch, 
> HDFS-11885.003.patch, HDFS-11885.004.patch
>
>
> When creating an encryption zone, we call {{ensureKeyIsInitialized}}, which 
> calls {{provider.warmUpEncryptedKeys(keyName)}}. This is a blocking call, 
> which attempts to fill the key cache up to the low watermark.
> If the KMS is down or slow, this can take a very long time, and cause the 
> createZone RPC to fail with a timeout.
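A natural direction — sketched here under the assumption of a dedicated 
background thread; this is not the committed fix — is to make the warm-up 
asynchronous so the createZone RPC returns promptly:

{code}
// Sketch: warm the EDEK cache in the background instead of blocking the RPC.
private final ExecutorService edekCacheLoader =
    Executors.newSingleThreadExecutor();

void warmUpEdekCacheAsync(final KeyProviderCryptoExtension provider,
                          final String keyName) {
  edekCacheLoader.submit(new Runnable() {
    @Override
    public void run() {
      try {
        provider.warmUpEncryptedKeys(keyName);
      } catch (IOException e) {
        // A cold cache only costs latency on the first EDEK request,
        // so log and move on rather than failing createZone.
        LOG.warn("Failed to warm up EDEK cache for key {}", keyName, e);
      }
    }
  });
}
{code}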



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11885) createEncryptionZone should not block on initializing EDEK cache

2017-08-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11885:
---
Priority: Major  (was: Critical)

I'm dropping this one in priority. Rushabh, we'd still appreciate your patch if 
you have it.

> createEncryptionZone should not block on initializing EDEK cache
> 
>
> Key: HDFS-11885
> URL: https://issues.apache.org/jira/browse/HDFS-11885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-11885.001.patch, HDFS-11885.002.patch, 
> HDFS-11885.003.patch, HDFS-11885.004.patch
>
>
> When creating an encryption zone, we call {{ensureKeyIsInitialized}}, which 
> calls {{provider.warmUpEncryptedKeys(keyName)}}. This is a blocking call, 
> which attempts to fill the key cache up to the low watermark.
> If the KMS is down or slow, this can take a very long time, and cause the 
> createZone RPC to fail with a timeout.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11986) Dfsadmin should report erasure coding related information separately

2017-08-25 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11986:
--
Attachment: HDFS-11986.02.patch

Attaching patch version 02 with the latest trunk rebase. [~eddyxu], can you 
please take a look?

> Dfsadmin should report erasure coding related information separately 
> -
>
> Key: HDFS-11986
> URL: https://issues.apache.org/jira/browse/HDFS-11986
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-11986.01.patch, HDFS-11986.02.patch
>
>
> The dfsadmin -report command currently reports only the aggregated block 
> stats like below.
> {noformat}
> # hdfs dfsadmin -report
> Configured Capacity: 1498775814144 (1.36 TB)
> Present Capacity: 931852427264 (867.86 GB)
> DFS Remaining: 931805765632 (867.81 GB)
> DFS Used: 46661632 (44.50 MB)
> DFS Used%: 0.01%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> Pending deletion blocks: 0
> {noformat}
> Just like fsck, the proposal is to make the dfsadmin command report erasure 
> coding block group stats separately, like below
> {noformat}
> # hdfs dfsadmin -report
> Configured Capacity: 1498775814144 (1.36 TB)
> Present Capacity: 931852427264 (867.86 GB)
> DFS Remaining: 931805765632 (867.81 GB)
> DFS Used: 46661632 (44.50 MB)
> DFS Used%: 0.01%
> Replicated Blocks:
>   Under replicated blocks: 0
>   Blocks with corrupt replicas: 0
>   Missing blocks: 0
>   Missing blocks (with replication factor 1): 0
>   Pending deletion blocks: 0
> Erasure Coded Block Groups:
>   Under ec block groups: 0
>   EC block groups with corrupt internal blocks: 0
>   Missing ec block groups: 0
>   Pending deletion ec block groups: 0
> {noformat}
> Erasure coding specific details needed for this enhancements are already made 
> available as part of HDFS-10999.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12303) Change default EC cell size to 1MB for better performance

2017-08-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12303:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks Wei for the nice contribution and Kai for reviewing!

> Change default EC cell size to 1MB for better performance
> -
>
> Key: HDFS-12303
> URL: https://issues.apache.org/jira/browse/HDFS-12303
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12303.00.patch, HDFS-12303.01.patch, 
> HDFS-12303.02.patch
>
>
> As discussed in HDFS-11814, a 1MB cell size shows better performance than 
> the other sizes during the tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12303) Change default EC cell size to 1MB for better performance

2017-08-25 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142212#comment-16142212
 ] 

Andrew Wang commented on HDFS-12303:


+1 LGTM, will check this in.

> Change default EC cell size to 1MB for better performance
> -
>
> Key: HDFS-12303
> URL: https://issues.apache.org/jira/browse/HDFS-12303
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12303.00.patch, HDFS-12303.01.patch, 
> HDFS-12303.02.patch
>
>
> As discussed in HDFS-11814, a 1MB cell size shows better performance than 
> the other sizes during the tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12349) Improve log message when it could not alloc enough blocks for EC

2017-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142208#comment-16142208
 ] 

Hadoop QA commented on HDFS-12349:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
162 unchanged - 0 fixed = 164 total (was 162) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |

[jira] [Updated] (HDFS-11834) Ozone: Fix TestArchive#testArchive

2017-08-25 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11834:
--
Status: Patch Available  (was: Open)

> Ozone: Fix TestArchive#testArchive
> --
>
> Key: HDFS-11834
> URL: https://issues.apache.org/jira/browse/HDFS-11834
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
> Attachments: HDFS-11834-HDFS-7240.001.patch
>
>
> This Adler32-based CRC check passes on Mac but mismatches on some 
> Jenkins machines, based on some recent Jenkins runs:
> {code}
> org.apache.hadoop.scm.TestArchive.testArchive
> Failing for the past 1 build (Since Failed#19352 )
> Took 21 sec.
> Error Message
> expected:<3488429799> but was:<2161587943>
> Stacktrace
> java.lang.AssertionError: expected:<3488429799> but was:<2161587943>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at org.apache.hadoop.scm.TestArchive.testArchive(TestArchive.java:104)
> {code}
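Worth noting: Adler32 itself is deterministic across platforms, so a mismatch 
like the one above usually means the archived bytes differ between runs (file 
ordering, timestamps, and so on), not that the checksum code is 
platform-dependent. A standalone check:

{code}
import java.nio.charset.StandardCharsets;
import java.util.zip.Adler32;

public class AdlerCheck {
  public static void main(String[] args) {
    // Identical input bytes always yield the same Adler32 value,
    // on Mac, Linux, or anywhere else the JVM runs.
    byte[] data = "same bytes, same checksum".getBytes(StandardCharsets.UTF_8);
    Adler32 adler = new Adler32();
    adler.update(data, 0, data.length);
    System.out.println(adler.getValue());
  }
}
{code}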



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11834) Ozone: Fix TestArchive#testArchive

2017-08-25 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11834:
--
Attachment: HDFS-11834-HDFS-7240.001.patch

Attaching a patch that fixes the test under Linux.

> Ozone: Fix TestArchive#testArchive
> --
>
> Key: HDFS-11834
> URL: https://issues.apache.org/jira/browse/HDFS-11834
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
> Attachments: HDFS-11834-HDFS-7240.001.patch
>
>
> This Adler32-based CRC check passes on Mac but mismatches on some 
> Jenkins machines, based on some recent Jenkins runs:
> {code}
> org.apache.hadoop.scm.TestArchive.testArchive
> Failing for the past 1 build (Since Failed#19352 )
> Took 21 sec.
> Error Message
> expected:<3488429799> but was:<2161587943>
> Stacktrace
> java.lang.AssertionError: expected:<3488429799> but was:<2161587943>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at org.apache.hadoop.scm.TestArchive.testArchive(TestArchive.java:104)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12215) DataNode#transferBlock does not create its daemon in the xceiver thread group

2017-08-25 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142197#comment-16142197
 ] 

Andrew Wang commented on HDFS-12215:


+1 LGTM pending Jenkins, thanks Eddy!

> DataNode#transferBlock does not create its daemon in the xceiver thread group
> -
>
> Key: HDFS-12215
> URL: https://issues.apache.org/jira/browse/HDFS-12215
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-12215.00.patch
>
>
> As mentioned in HDFS-12044, the DataNode#transferBlock daemon is not counted 
> toward the xceiver count.
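The gist of the fix is where the transfer thread gets created; in sketch form 
(the {{DataTransfer}} constructor arguments are abbreviated and assumed from 
context):

{code}
// Creating the Daemon inside the DataNode's xceiver ThreadGroup makes the
// transfer thread count toward the xceiver total.
Daemon daemon = new Daemon(threadGroup,
    new DataTransfer(targets, targetStorageTypes, block, stage, clientName));
daemon.start();
{code}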



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11220) SnapshotDiffReport should detect open files in HDFS Snapshots

2017-08-25 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11220:
--
Fix Version/s: 2.9.0

> SnapshotDiffReport should detect open files in HDFS Snapshots
> -
>
> Key: HDFS-11220
> URL: https://issues.apache.org/jira/browse/HDFS-11220
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 2.9.0, 3.0.0-beta1
>
>
> *Problem:*
> 1. When there are files being written and HDFS Snapshots are taken in 
> parallel, Snapshots do capture all these files, but the files being written 
> do not have their point-in-time length captured in the Snapshots. Most of 
> the time, these open files will have a length of 0, or the last block 
> boundary size.
> 2. Only at the time of File close or any other meta data modification 
> operation on these files, HDFS reconciles the file length and records the 
> modification in the last taken Snapshot. All the previously taken Snapshots 
> continue to have those open Files with no modification recorded. So, all 
> those previous snapshots end up using the final modification record in the 
> next available snapshot. So, after the file close, file lengths in all those 
> snapshots will end up same.
> Assume File1 is opened for write and a total of 1MB is written to it. While 
> the writes are happening, snapshots are taken in parallel.
> {noformat}
> |---Time---T1---T2-T3T4-->
> |---Snap1--Snap2-Snap3--->
> |---File1.open---write-write---close->
> {noformat}
> Then at time,
> T2:
> Snap1.File1.length = 0
> T3:
> Snap1.File1.length = 0
> Snap2.File1.length = 0
> 
> T4:
> Snap1.File1.length = 1MB
> Snap2.File1.length = 1MB
> Snap3.File1.length = 1MB
> So, a Snapshot Diff Report run against any of the above snapshots will not 
> detect any delta changes in the open files. 
> *Proposal:*
> 1. HDFS Snapshots can stash open file details in the snapshot record. 
> 2. Since the NameNode might not have accurate byte-level length visibility on 
> open files, Snapshots might not have the accurate point-in-time length 
> captured. So, SnapshotDiffReport can have an option to detect open files and 
> always show the {{M}} flag for the open files, if the files are present in 
> both snapshots it is run against. 
> {noformat}
> hdfs snapshotDiff -includeOpenFiles   
> {noformat}
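Programmatically, the same diff is available through the existing 
{{DistributedFileSystem#getSnapshotDiffReport}} API; the sketch below shows 
where the proposed {{M}} entries for open files would surface:

{code}
// Sketch: obtain a snapshot diff report between two snapshots.
DistributedFileSystem dfs = cluster.getFileSystem();
SnapshotDiffReport report =
    dfs.getSnapshotDiffReport(new Path("/snapDir"), "Snap1", "Snap2");
for (SnapshotDiffReport.DiffReportEntry entry : report.getDiffList()) {
  // Under the proposal, an open File1 would appear here with an "M" entry
  // whenever it exists in both snapshots.
  System.out.println(entry);
}
{code}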



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11720) LeaseManager#getINodeWithLeases() should support skipping leases of deleted files with snapshot feature

2017-08-25 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11720:
--
Fix Version/s: 2.9.0

> LeaseManager#getINodeWithLeases() should support skipping leases of deleted 
> files with snapshot feature
> ---
>
> Key: HDFS-11720
> URL: https://issues.apache.org/jira/browse/HDFS-11720
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 2.9.0, 3.0.0-beta1
>
>
> {{LeaseManager#getINodeWithLeases()}} currently returns a set of INodesInPath 
> for all the leases currently in the system. But these leases could also 
> belong to a file with the snapshot feature that just got deleted and is not 
> yet purged. It would be better to have a version of 
> {{LeaseManager#getINodeWithLeases()}} that returns the IIP set only for 
> non-deleted files, so that callers like createSnapshot, which want to look at 
> open files only, don't get tripped up on the deleted files.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11218) Add option to skip open files during HDFS Snapshots

2017-08-25 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11218:
--
Fix Version/s: 2.9.0

> Add option to skip open files during HDFS Snapshots
> ---
>
> Key: HDFS-11218
> URL: https://issues.apache.org/jira/browse/HDFS-11218
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 2.9.0, 3.0.0-beta1
>
>
> *Problem:* 
> When there are files being written and HDFS Snapshots are taken in 
> parallel, Snapshots do capture all these files, but the files being written 
> do not have their point-in-time length captured in the Snapshots.
> At the time of File close or any other meta data modification operation on 
> that file which was previously open, HDFS reconciles the file length and 
> records the modification in the last taken Snapshot. All the previously taken 
> Snapshots continue to have the same open File with no modification recorded. 
> So, all those previous snapshots end up using the final modification record 
> in the next available snapshot.
> *Proposal:*
> The HDFS Snapshot design goal was to have O(M) space usage for Snapshots, 
> where M is the number of file modifications. So, it would be very expensive 
> to record modifications for all the open files in all the snapshots. For 
> applications that do not want to capture incomplete / partially written 
> binary files in the snapshots, it would be preferable to have an extra 
> option to skip open 
> files. This way, they don't have to worry about restoring inconsistent files 
> from the snapshots. 
> {noformat}
> hdfs dfs -createSnapshot -skipOpenFiles  
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11218) Add option to skip open files during HDFS Snapshots

2017-08-25 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11218:
--
Fix Version/s: 3.0.0-beta1

> Add option to skip open files during HDFS Snapshots
> ---
>
> Key: HDFS-11218
> URL: https://issues.apache.org/jira/browse/HDFS-11218
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 2.9.0, 3.0.0-beta1
>
>
> *Problem:* 
> When there are files being written and HDFS Snapshots are taken in 
> parallel, Snapshots do capture all these files, but the files being written 
> do not have their point-in-time length captured in the Snapshots.
> At the time of File close or any other meta data modification operation on 
> that file which was previously open, HDFS reconciles the file length and 
> records the modification in the last taken Snapshot. All the previously taken 
> Snapshots continue to have the same open File with no modification recorded. 
> So, all those previous snapshots end up using the final modification record 
> in the next available snapshot.
> *Proposal:*
> The HDFS Snapshot design goal was to have O(M) space usage for Snapshots, 
> where M is the number of file modifications. So, it would be very expensive 
> to record modifications for all the open files in all the snapshots. For 
> applications that do not want to capture incomplete / partially written 
> binary files in the snapshots, it would be preferable to have an extra 
> option to skip open 
> files. This way, they don't have to worry about restoring inconsistent files 
> from the snapshots. 
> {noformat}
> hdfs dfs -createSnapshot -skipOpenFiles  
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-11220) SnapshotDiffReport should detect open files in HDFS Snapshots

2017-08-25 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy resolved HDFS-11220.
---
   Resolution: Workaround
Fix Version/s: 3.0.0-beta1

The core issue described in the jira is not a problem any more. With the fix 
for HDFS-11402, we have a workaround to capture immutable copies of open files 
in the snapshots. 

> SnapshotDiffReport should detect open files in HDFS Snapshots
> -
>
> Key: HDFS-11220
> URL: https://issues.apache.org/jira/browse/HDFS-11220
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-beta1
>
>
> *Problem:*
> 1. When there are files being written and HDFS Snapshots are taken in 
> parallel, Snapshots do capture all these files, but the files being written 
> do not have their point-in-time length captured in the Snapshots. Most of 
> the time, these open files will have a length of 0, or the last block 
> boundary size.
> 2. Only at the time of File close or any other meta data modification 
> operation on these files, HDFS reconciles the file length and records the 
> modification in the last taken Snapshot. All the previously taken Snapshots 
> continue to have those open Files with no modification recorded. So, all 
> those previous snapshots end up using the final modification record in the 
> next available snapshot. So, after the file close, file lengths in all those 
> snapshots will end up same.
> Assume File1 is opened for write and a total of 1MB written to it. While the 
> writes are happening, snapshots are taken in parallel.
> {noformat}
> |---Time---T1---T2-T3T4-->
> |---Snap1--Snap2-Snap3--->
> |---File1.open---write-write---close->
> {noformat}
> Then at time,
> T2:
> Snap1.File1.length = 0
> T3:
> Snap1.File1.length = 0
> Snap2.File1.length = 0
> 
> T4:
> Snap1.File1.length = 1MB
> Snap2.File1.length = 1MB
> Snap3.File1.length = 1MB
> So, Snapshot Diff Report running against any of above snapshots will not 
> detect any delta changes in the open files. 
> *Proposal:*
> 1. HDFS Snapshots can stash open file details in the snapshot record. 
> 2. NameNode might not have the accurate byte level length visibility on the 
> open files, Snapshots might not have the accurate point-in-time length 
> captured. So, SnapshotDiffReport can have an option to detect open files and 
> always show {{M}} flag for the open files, if the files are available on both 
> the snapshots it is running against with. 
> {noformat}
> hdfs snapshotDiff -includeOpenFiles   
> {noformat}
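
For context, consuming a diff report programmatically looks roughly like the 
sketch below; under this proposal, open files present in both snapshots would 
always surface with the {{M}} flag (the NameNode URI and paths are 
placeholders):

{code}
// Minimal sketch: fetch and print a snapshot diff report.
Configuration conf = new Configuration();
DistributedFileSystem dfs = (DistributedFileSystem)
    FileSystem.get(URI.create("hdfs://nn:8020"), conf);  // placeholder NN
SnapshotDiffReport report =
    dfs.getSnapshotDiffReport(new Path("/data"), "snap1", "snap2");
for (SnapshotDiffReport.DiffReportEntry entry : report.getDiffList()) {
  System.out.println(entry);  // e.g. "M  ./file1" for a modified file
}
{code}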



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-11218) Add option to skip open files during HDFS Snapshots

2017-08-25 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy resolved HDFS-11218.
---
Resolution: Workaround

The core issue described in the jira is not a problem any more. With the fix 
for HDFS-11402, we have a workaround to capture immutable copies of open files 
in the snapshots. 

> Add option to skip open files during HDFS Snapshots
> ---
>
> Key: HDFS-11218
> URL: https://issues.apache.org/jira/browse/HDFS-11218
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>
> *Problem:* 
> When files are being written and HDFS Snapshots are taken in parallel, the 
> Snapshots do capture all these files, but the files still being written do 
> not have their point-in-time lengths captured.
> At the time of file close, or of any other metadata modification on a file 
> that was previously open, HDFS reconciles the file length and records the 
> modification in the last taken Snapshot. All the previously taken Snapshots 
> continue to have the same open file with no modification recorded. So, all 
> those previous snapshots end up using the final modification record in the 
> next available snapshot.
> *Proposal:*
> The HDFS Snapshot design goal was to have O(M) space usage for Snapshots, 
> where M is the number of file modifications. So it would be very expensive 
> to record modifications for all the open files in all the snapshots. For 
> applications that do not want to capture incomplete, partially written 
> files in the snapshots, it would be preferable to have an extra option to 
> skip open files. This way, they don't have to worry about restoring 
> inconsistent files from the snapshots. 
> {noformat}
> hdfs dfs -createSnapshot -skipOpenFiles <snapshotDir> [<snapshotName>]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12215) DataNode#transferBlock does not create its daemon in the xceiver thread group

2017-08-25 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142176#comment-16142176
 ] 

Hanisha Koneru commented on HDFS-12215:
---

Thanks for the fix, [~eddyxu]. 
The patch LGTM. +1 (non-binding).

> DataNode#transferBlock does not create its daemon in the xceiver thread group
> -
>
> Key: HDFS-12215
> URL: https://issues.apache.org/jira/browse/HDFS-12215
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-12215.00.patch
>
>
> As mentioned in HDFS-12044, the DataNode#transferBlock daemon is not 
> counted in the xceiver count.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-11720) LeaseManager#getINodeWithLeases() should support skipping leases of deleted files with snapshot feature

2017-08-25 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy resolved HDFS-11720.
---
   Resolution: Duplicate
Fix Version/s: 3.0.0-beta1

HDFS-12217 solved the issue described here - 
{{LeaseManager#getINodeWithLeases()}} should skip deleted files. Closing this 
jira.

> LeaseManager#getINodeWithLeases() should support skipping leases of deleted 
> files with snapshot feature
> ---
>
> Key: HDFS-11720
> URL: https://issues.apache.org/jira/browse/HDFS-11720
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-beta1
>
>
> {{LeaseManager#getINodeWithLeases()}} currently returns a set of INodesInPath 
> for all the leases currently in the system. But these leases could also 
> belong to a file with the snapshot feature which was just deleted and not 
> yet purged. It would be better to have a version of 
> {{LeaseManager#getINodeWithLeases()}} that returns the IIP set only for 
> non-deleted files, so that users like createSnapshot, which want to look at 
> open files only, don't get tripped up by the deleted files.  
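
As a sketch of the shape such a variant could have taken (every name below is 
an assumption for illustration; the actual work landed via HDFS-12217):

{code}
// Hypothetical filtered variant: return leased paths only for files
// still reachable in the current namespace, skipping files that were
// deleted but are retained by a snapshot.
Set<INodesInPath> getINodeWithLeases(boolean skipDeleted) {
  Set<INodesInPath> result = new HashSet<>();
  for (INodesInPath iip : getINodeWithLeases()) {
    INode last = iip.getLastINode();
    if (skipDeleted
        && (last == null || isDeletedButInSnapshot(last))) { // assumed
      continue;
    }
    result.add(iip);
  }
  return result;
}
{code}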



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12215) DataNode#transferBlock does not create its daemon in the xceiver thread group

2017-08-25 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12215:
-
Status: Patch Available  (was: Open)

> DataNode#transferBlock does not create its daemon in the xceiver thread group
> -
>
> Key: HDFS-12215
> URL: https://issues.apache.org/jira/browse/HDFS-12215
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-12215.00.patch
>
>
> As mentioned in HDFS-12044, the DataNode#transferBlock daemon is not 
> counted in the xceiver count.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-11720) LeaseManager#getINodeWithLeases() should support skipping leases of deleted files with snapshot feature

2017-08-25 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-11720 started by Manoj Govindassamy.
-
> LeaseManager#getINodeWithLeases() should support skipping leases of deleted 
> files with snapshot feature
> ---
>
> Key: HDFS-11720
> URL: https://issues.apache.org/jira/browse/HDFS-11720
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>
> {{LeaseManager#getINodeWithLeases()}} currently returns a set of INodesInPath 
> for all the leases currently in the system. But, these leases could also 
> belong to a file with snapshot feature, which just got deleted and not yet 
> purged. Better if we can have version of 
> {{LeaseManager#getINodeWithLeases()}} which returns IIP set only for 
> non-deleted files so that some of its users like createSnapshot which wants 
> to look at open files only don't get tripped on the deleted files.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12215) DataNode#transferBlock does not create its daemon in the xceiver thread group

2017-08-25 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12215:
-
Attachment: HDFS-12215.00.patch

The patch simply puts {{DataTransfer}} into a {{Daemon}}, the same as in 
{{#transferBlock()}}.
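
Independent of the DataNode internals, the underlying Java pattern is just 
constructing the worker thread inside the xceiver {{ThreadGroup}} so that it 
shows up in the group's count (a generic illustration, not the patch itself):

{code}
// A thread created inside a ThreadGroup is reflected in activeCount(),
// which is how the DataNode derives its xceiver count.
ThreadGroup xceivers = new ThreadGroup("dataXceiverServer");
Thread transfer = new Thread(xceivers, () -> {
  // ... block transfer work would go here ...
}, "DataTransfer");
transfer.setDaemon(true);
transfer.start();
System.out.println("active: " + xceivers.activeCount()); // includes it
{code}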

> DataNode#transferBlock does not create its daemon in the xceiver thread group
> -
>
> Key: HDFS-12215
> URL: https://issues.apache.org/jira/browse/HDFS-12215
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-12215.00.patch
>
>
> As mentioned in HDFS-12044, the DataNode#transferBlock daemon is not 
> counted in the xceiver count.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-08-25 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang reassigned HDFS-12357:


Assignee: Yongjun Zhang

> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If an external attribute provider is enabled, the metadata 
> may be read from the provider, and thus provider data read from the source 
> may be saved to the target HDFS. 
> We want to avoid saving metadata from the external provider to HDFS, so we 
> want to bypass the external provider when doing the distcp (or hadoop fs 
> -cp) operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is to introduce a new config that specifies a special user (or a 
> list of users), and let the NN bypass the external provider when the current 
> user is one of the special users.
> If we run applications that need data from the external attribute provider 
> as the special user, they won't work. So the constraint on this approach is 
> that the special users should not run applications that need data from the 
> external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-08-25 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-12357:


 Summary: Let NameNode to bypass external attribute provider for 
special user
 Key: HDFS-12357
 URL: https://issues.apache.org/jira/browse/HDFS-12357
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yongjun Zhang


This is a third proposal to solve the problem described in HDFS-12202.

The problem is, when we do distcp from one cluster to another (or within the 
same cluster), in addition to copying file data, we copy the metadata from 
source to target. If an external attribute provider is enabled, the metadata 
may be read from the provider, and thus provider data read from the source may 
be saved to the target HDFS. 

We want to avoid saving metadata from the external provider to HDFS, so we 
want to bypass the external provider when doing the distcp (or hadoop fs -cp) 
operation.

Two alternative approaches were proposed earlier, one in HDFS-12202, the other 
in HDFS-12294. The proposal here is the third one.

The idea is to introduce a new config that specifies a special user (or a list 
of users), and let the NN bypass the external provider when the current user 
is one of the special users.

If we run applications that need data from the external attribute provider as 
the special user, they won't work. So the constraint on this approach is that 
the special users should not run applications that need data from the external 
provider.

Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
[~manojg] for the discussions in the other jiras. 

I'm creating this one to discuss further.
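
A rough sketch of the proposed check (the config key and the surrounding 
names are assumptions for discussion, not a patch):

{code}
// Hypothetical: bypass the external INodeAttributeProvider when the
// caller is one of the configured special users.
String[] specialUsers = conf.getTrimmedStrings(
    "dfs.namenode.attribute-provider.bypass.users");      // assumed key
String caller =
    UserGroupInformation.getCurrentUser().getShortUserName();
boolean bypass = Arrays.asList(specialUsers).contains(caller);

INodeAttributes attrs = inode;                   // raw HDFS attributes
if (!bypass && attributeProvider != null) {
  attrs = attributeProvider.getAttributes(pathElements, attrs);
}
{code}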






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12321) Ozone : debug cli: add support to load user-provided SQL query

2017-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142132#comment-16142132
 ] 

Hadoop QA commented on HDFS-12321:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
0s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
27s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
6s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
46s{color} | {color:green} hadoop-hdfs-project generated 0 new + 469 unchanged 
- 2 fixed = 469 total (was 471) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 45s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.scm.TestArchive |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.ozone.ksm.TestSQLQuery |
|   | hadoop.hdfs.server.namenode.ha.TestHAMetrics |
|   | hadoop.ozone.scm.node.TestQueryNode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12321 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883776/HDFS-12321-HDFS-7240.010.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  shellcheck  shelldocs  |
| uname | Linux b0226698f906 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 

[jira] [Commented] (HDFS-10391) Always enable NameNode service RPC port

2017-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142129#comment-16142129
 ] 

Hadoop QA commented on HDFS-10391:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 25 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 59s{color} | {color:orange} hadoop-hdfs-project: The patch generated 21 new 
+ 1248 unchanged - 40 fixed = 1269 total (was 1288) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-10391 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880561/HDFS-10391.010.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 5ef23b97930b 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4b2c442 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20865/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20865/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test 

[jira] [Commented] (HDFS-12299) Race Between update pipeline and DN Re-Registration

2017-08-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142120#comment-16142120
 ] 

Hudson commented on HDFS-12299:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12243 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12243/])
HDFS-12299. Race Between update pipeline and DN Re-Registration (kihwal: rev 
8455d70756b584ddf27fc626a147f4eb2e1dc94e)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientProtocolForPipelineRecovery.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java


> Race Between update pipeline and DN Re-Registration
> ---
>
> Key: HDFS-12299
> URL: https://issues.apache.org/jira/browse/HDFS-12299
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: HDFS-12299-branch-2-002.patch, 
> HDFS-12299-branch-2.patch, HDFS-12299.patch
>
>
>  *Scenario*
>  - Started a pipeline with DN1->DN2->DN3
>  - DN1 re-registers and updatePipeline is called
>  - updatePipeline succeeds with DN1->DN3->DN4
>  - updatePipeline is called again, and fails with an NPE.
> In step 3, updatePipeline sets the storages to null, since DN1's 
> re-registration removes and re-adds the storages.
> {{FSNamesystem#updatePipelineInternal}}
> {code}
>lastBlock.getUnderConstructionFeature().setExpectedLocations(lastBlock,
> storages, lastBlock.getBlockType())
> {code}
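
The committed fix landed on the client side in {{DataStreamer}} (see the file 
list above). Purely to illustrate the NN-side hazard, a defensive variant of 
the quoted code would skip storages that resolved to null after the 
re-registration (a sketch, not the actual change):

{code}
// Illustration only: filter out storages that became null after a
// datanode re-registration before updating the expected locations.
List<DatanodeStorageInfo> valid = new ArrayList<>();
for (DatanodeStorageInfo storage : storages) {
  if (storage != null) {
    valid.add(storage);
  }
}
lastBlock.getUnderConstructionFeature().setExpectedLocations(lastBlock,
    valid.toArray(new DatanodeStorageInfo[0]), lastBlock.getBlockType());
{code}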



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12136) BlockSender performance regression due to volume scanner edge case

2017-08-25 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142118#comment-16142118
 ] 

Kihwal Lee commented on HDFS-12136:
---

I think the performance impact is less severe after HDFS-12157, so we could 
target 2.8.3 for the fix. 

> BlockSender performance regression due to volume scanner edge case
> --
>
> Key: HDFS-12136
> URL: https://issues.apache.org/jira/browse/HDFS-12136
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-12136.branch-2.patch, HDFS-12136.trunk.patch
>
>
> HDFS-11160 attempted to fix a volume scan race for a file appended mid-scan 
> by reading the last checksum of finalized blocks within the {{BlockSender}} 
> ctor. Unfortunately it holds the exclusive dataset lock to open and read 
> the metafile multiple times, so block sender instantiation becomes 
> serialized.
> Performance completely collapses under heavy disk i/o utilization or high 
> xceiver activity, e.g. lost node replication, balancing, or decommissioning. 
> The xceiver threads congest creating block senders and impair the heartbeat 
> processing that is contending for the same lock. Combined with other lock 
> contention issues, pipelines break and nodes sporadically go dead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12084) Scheduled Count will not decrement when file is deleted before all IBR's received

2017-08-25 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142110#comment-16142110
 ] 

Kihwal Lee commented on HDFS-12084:
---

I don't know if it is related, but I see reserved RBW space get stuck and 
never go down on certain datanodes.

> Scheduled Count will not decrement when file is deleted before all IBR's 
> received
> -
>
> Key: HDFS-12084
> URL: https://issues.apache.org/jira/browse/HDFS-12084
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-12084-001.patch, HDFS-12084-002.patch, 
> HDFS-12084-003.patch, HDFS-12084-branch-2.patch
>
>
> When small-file creation && deletion happen very frequently and the DNs did 
> not report the blocks to the NN before deletion, the scheduled count keeps 
> incrementing and is never decremented, because the blocks are already 
> deleted.
> *Note*: every 20 mins this count can be rolled, but within those 20 mins 
> the count can grow large given the number of operations.
> When batch IBR is enabled with committed allowed=1, this is observed more 
> often.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12299) Race Between update pipeline and DN Re-Registration

2017-08-25 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-12299:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.2
   3.0.0-beta1
   2.9.0
   Status: Resolved  (was: Patch Available)

I've committed this to trunk, branch-2, branch-2.8 and branch-2.8.2. Thanks for 
working on this, Brahma.

> Race Between update pipeline and DN Re-Registration
> ---
>
> Key: HDFS-12299
> URL: https://issues.apache.org/jira/browse/HDFS-12299
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: HDFS-12299-branch-2-002.patch, 
> HDFS-12299-branch-2.patch, HDFS-12299.patch
>
>
>  *Scenario*
>  - Started a pipeline with DN1->DN2->DN3
>  - DN1 re-registers and updatePipeline is called
>  - updatePipeline succeeds with DN1->DN3->DN4
>  - updatePipeline is called again, and fails with an NPE.
> In step 3, updatePipeline sets the storages to null, since DN1's 
> re-registration removes and re-adds the storages.
> {{FSNamesystem#updatePipelineInternal}}
> {code}
>lastBlock.getUnderConstructionFeature().setExpectedLocations(lastBlock,
> storages, lastBlock.getBlockType())
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation

2017-08-25 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142102#comment-16142102
 ] 

Arpit Agarwal commented on HDFS-11968:
--

Thanks for reporting this issue [~msingh]. A few comments:
# I don't think listing policies from all namespaces in getStoragePolicy is the 
right behavior. Policies with the same name may have different meanings in 
different clusters. It's okay to print an error message for viewfs and maybe 
list all available DistributedFileSystems so the administrator can query each.
# Can you please add javadocs to getResolvedPath to summarize the behavior? 
Also perhaps it should be renamed to getResolvedDfsPath.

> ViewFS: StoragePolicies commands fail with HDFS federation
> --
>
> Key: HDFS-11968
> URL: https://issues.apache.org/jira/browse/HDFS-11968
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11968.001.patch, HDFS-11968.002.patch, 
> HDFS-11968.003.patch
>
>
> The hdfs storagepolicies command fails with HDFS federation.
> For storage policy commands, a given user path should be resolved to an 
> HDFS path, and the storage policy command should be applied to the resolved 
> HDFS path.
> {code}
>   static DistributedFileSystem getDFS(Configuration conf)
>   throws IOException {
> FileSystem fs = FileSystem.get(conf);
> if (!(fs instanceof DistributedFileSystem)) {
>   throw new IllegalArgumentException("FileSystem " + fs.getUri() +
>   " is not an HDFS file system");
> }
> return (DistributedFileSystem)fs;
>   }
> {code}
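
One possible shape of the resolution step (a sketch under the assumption that 
{{FileSystem#resolvePath}} follows the ViewFS mount table; not the attached 
patch):

{code}
static DistributedFileSystem getResolvedDfs(Configuration conf, Path userPath)
    throws IOException {
  // Resolve viewfs:// mount points to the underlying cluster path first.
  FileSystem fs = userPath.getFileSystem(conf);
  Path resolved = fs.resolvePath(userPath);
  FileSystem target = resolved.getFileSystem(conf);
  if (!(target instanceof DistributedFileSystem)) {
    throw new IllegalArgumentException("FileSystem " + target.getUri() +
        " is not an HDFS file system");
  }
  return (DistributedFileSystem) target;
}
{code}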



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12356) Unit test for JN sync during Rolling Upgrade

2017-08-25 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDFS-12356:
-

 Summary: Unit test for JN sync during Rolling Upgrade
 Key: HDFS-12356
 URL: https://issues.apache.org/jira/browse/HDFS-12356
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


Adding unit tests for the JN sync functionality during a rolling upgrade of 
the NN.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12299) Race Between update pipeline and DN Re-Registration

2017-08-25 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142051#comment-16142051
 ] 

Kihwal Lee commented on HDFS-12299:
---

+1 LGTM

> Race Between update pipeline and DN Re-Registration
> ---
>
> Key: HDFS-12299
> URL: https://issues.apache.org/jira/browse/HDFS-12299
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-12299-branch-2-002.patch, 
> HDFS-12299-branch-2.patch, HDFS-12299.patch
>
>
>  *Scenario*
>  - Started a pipeline with DN1->DN2->DN3
>  - DN1 re-registers and updatePipeline is called
>  - updatePipeline succeeds with DN1->DN3->DN4
>  - updatePipeline is called again, and fails with an NPE.
> In step 3, updatePipeline sets the storages to null, since DN1's 
> re-registration removes and re-adds the storages.
> {{FSNamesystem#updatePipelineInternal}}
> {code}
>lastBlock.getUnderConstructionFeature().setExpectedLocations(lastBlock,
> storages, lastBlock.getBlockType())
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12352) [branch-2] Use HDFS specific network topology to choose datanode in BlockPlacementPolicyDefault

2017-08-25 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142013#comment-16142013
 ] 

Arpit Agarwal commented on HDFS-12352:
--

[~linyiqun] can you please commit this patch if you are +1?

> [branch-2] Use HDFS specific network topology to choose datanode in 
> BlockPlacementPolicyDefault
> ---
>
> Key: HDFS-12352
> URL: https://issues.apache.org/jira/browse/HDFS-12352
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12352-branch-2.001.patch
>
>
> This JIRA is to backport HDFS-11530 to branch-2



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12321) Ozone : debug cli: add support to load user-provided SQL query

2017-08-25 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12321:
--
Attachment: HDFS-12321-HDFS-7240.010.patch

Fixed the checkstyle and findbugs issues in the v010 patch.

> Ozone : debug cli: add support to load user-provided SQL query
> --
>
> Key: HDFS-12321
> URL: https://issues.apache.org/jira/browse/HDFS-12321
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: ozone
>
> Attachments: HDFS-12321-HDFS-7240.001.patch, 
> HDFS-12321-HDFS-7240.002.patch, HDFS-12321-HDFS-7240.003.patch, 
> HDFS-12321-HDFS-7240.004.patch, HDFS-12321-HDFS-7240.005.patch, 
> HDFS-12321-HDFS-7240.006.patch, HDFS-12321-HDFS-7240.007.patch, 
> HDFS-12321-HDFS-7240.008.patch, HDFS-12321-HDFS-7240.009.patch, 
> HDFS-12321-HDFS-7240.010.patch
>
>
> This JIRA extends the SQL CLI to support loading a user-provided file 
> containing any SQL query the user wants to run on the SQLite db obtained by 
> converting the Ozone metadata db.
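
For reference, running a user-supplied query file against a SQLite db over 
JDBC is roughly the following (a generic sketch assuming a sqlite-jdbc driver 
on the classpath, with placeholder file names; imports elided, not the Ozone 
CLI code):

{code}
// Generic sketch: execute a user-provided SQL file and print each row.
String sql = new String(
    Files.readAllBytes(Paths.get("query.sql")),   // user-provided file
    StandardCharsets.UTF_8);
try (Connection conn =
         DriverManager.getConnection("jdbc:sqlite:ozone.db");
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery(sql)) {
  int cols = rs.getMetaData().getColumnCount();
  while (rs.next()) {
    StringBuilder row = new StringBuilder();
    for (int i = 1; i <= cols; i++) {
      row.append(rs.getString(i)).append('\t');
    }
    System.out.println(row);
  }
}
{code}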



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7764) DirectoryScanner shouldn't abort the scan if one directory had an error

2017-08-25 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16141983#comment-16141983
 ] 

Arpit Agarwal commented on HDFS-7764:
-

Cherry-picked to branch-2.8.

> DirectoryScanner shouldn't abort the scan if one directory had an error
> ---
>
> Key: HDFS-7764
> URL: https://issues.apache.org/jira/browse/HDFS-7764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.0
>Reporter: Rakesh R
>Assignee: Rakesh R
> Fix For: 2.9.0, 3.0.0-alpha1, 2.8.2
>
> Attachments: HDFS-7764-01.patch, HDFS-7764-02.patch, 
> HDFS-7764-03.patch, HDFS-7764-04.patch, HDFS-7764.patch
>
>
> If there is an exception while preparing the ScanInfo for the blocks in a 
> directory, DirectoryScanner immediately throws the exception and bails out 
> of the current scan cycle. The idea of this jira is to discuss & improve 
> the exception handling mechanism.
> DirectoryScanner.java
> {code}
> for (Entry<Integer, Future<ScanInfoPerBlockPool>> report :
>     compilersInProgress.entrySet()) {
>   try {
>     dirReports[report.getKey()] = report.getValue().get();
>   } catch (Exception ex) {
>     LOG.error("Error compiling report", ex);
>     // Propagate ex to DataBlockScanner to deal with
>     throw new RuntimeException(ex);
>   }
> {code}
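
The direction the attached patches take (sketched here from the snippet 
above, not copied from them) is to log the failure and continue with the 
remaining directories instead of aborting the cycle:

{code}
for (Entry<Integer, Future<ScanInfoPerBlockPool>> report :
    compilersInProgress.entrySet()) {
  try {
    dirReports[report.getKey()] = report.getValue().get();
  } catch (Exception ex) {
    // Log and skip this directory for the current scan cycle rather
    // than propagating the exception and aborting the whole scan.
    LOG.error("Error compiling report; skipping directory " +
        report.getKey() + " for this cycle", ex);
    dirReports[report.getKey()] = null;
  }
}
{code}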



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12336) Listing encryption zones still fails when deleted EZ is not a direct child of snapshottable directory

2017-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16141981#comment-16141981
 ] 

Hadoop QA commented on HDFS-12336:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 96 unchanged - 0 fixed = 97 total (was 96) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSClientRetries |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12336 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883763/HDFS-12336.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 503af96149ee 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3a4e861 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20863/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20863/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20863/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
