[jira] [Created] (HDFS-12929) There is an error message when hdfs dfsadmin is run against a ViewFS config

2017-12-15 Thread KaiXinXIaoLei (JIRA)
KaiXinXIaoLei created HDFS-12929:


 Summary: There is an error message when hdfs dfsadmin is run against 
a ViewFS config
 Key: HDFS-12929
 URL: https://issues.apache.org/jira/browse/HDFS-12929
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: KaiXinXIaoLei


With a ViewFS configuration, I run "hdfs dfsadmin -safemode get" and get this error:

{noformat}
safemode: FileSystem viewfs://XX/ is not an HDFS file system
{noformat}







[jira] [Created] (HDFS-12930) Remove the extra space in HdfsImageViewer.md

2017-12-15 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-12930:


 Summary: Remove the extra space in HdfsImageViewer.md
 Key: HDFS-12930
 URL: https://issues.apache.org/jira/browse/HDFS-12930
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: newbie
Affects Versions: 3.0.0
Reporter: Yiqun Lin
Priority: Trivial


There is one extra space in HdfsImageViewer.md that leads to a page rendering error.
{noformat}
* [GETXATTRS](./WebHDFS.html#Get_an_XAttr)
* [LISTXATTRS](./WebHDFS.html#List_all_XAttrs)
* [CONTENTSUMMARY] (./WebHDFS.html#Get_Content_Summary_of_a_Directory)
{noformat}
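
Presumably the fix is simply to drop that space so the entry is parsed as a 
Markdown link:
{noformat}
* [CONTENTSUMMARY](./WebHDFS.html#Get_Content_Summary_of_a_Directory)
{noformat}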






[jira] [Created] (HDFS-12931) Hive external table create fails because of failure to fetch block MD5 checksum

2017-12-15 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-12931:


 Summary: Hive external table create fails because of failure to 
fetch block MD5 checksum
 Key: HDFS-12931
 URL: https://issues.apache.org/jira/browse/HDFS-12931
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: encryption
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh


Hive external table creation fails because HDFS fails to fetch the block MD5 
checksum.
This happens because of an InvalidEncryptionKeyException.

{code}
org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException: 
Can't re-compute encryption key for nonce, since the required block key 
(keyID=-1675775329) doesn't exist. Current key: 1496422662
at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessageAndNegotiatedCipherOption(DataTransferSaslUtil.java:417)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:478)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:300)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:241)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.newSocketSend(SaslDataTransferClient.java:142)
at org.apache.hadoop.hdfs.DFSClient.connectToDN(DFSClient.java:2450)
at org.apache.hadoop.hdfs.DFSClient.getFileChecksum(DFSClient.java:2310)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$30.doCall(DistributedFileSystem.java:1569)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$30.doCall(DistributedFileSystem.java:1565)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1577)
{code}






[jira] [Updated] (HDFS-12930) Remove the extra space in HdfsImageViewer.md

2017-12-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12930:
-
Description: 
There is one extra space in HdfsImageViewer.md that leads to a page rendering error.
{noformat}
* [GETXATTRS](./WebHDFS.html#Get_an_XAttr)
* [LISTXATTRS](./WebHDFS.html#List_all_XAttrs)
* [CONTENTSUMMARY] (./WebHDFS.html#Get_Content_Summary_of_a_Directory)
{noformat}
See the Hadoop 3.0.0 site: 
http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html

  was:
There is one extra space in HdfsImageViewer.md that leads to a page rendering error.
{noformat}
* [GETXATTRS](./WebHDFS.html#Get_an_XAttr)
* [LISTXATTRS](./WebHDFS.html#List_all_XAttrs)
* [CONTENTSUMMARY] (./WebHDFS.html#Get_Content_Summary_of_a_Directory)
{noformat}


> Remove the extra space in HdfsImageViewer.md
> 
>
> Key: HDFS-12930
> URL: https://issues.apache.org/jira/browse/HDFS-12930
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: newbie
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Priority: Trivial
>
> There is one extra space in HdfsImageViewer.md that leads to a page rendering error.
> {noformat}
> * [GETXATTRS](./WebHDFS.html#Get_an_XAttr)
> * [LISTXATTRS](./WebHDFS.html#List_all_XAttrs)
> * [CONTENTSUMMARY] (./WebHDFS.html#Get_Content_Summary_of_a_Directory)
> {noformat}
> See the Hadoop 3.0.0 site: 
> http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html






[jira] [Updated] (HDFS-12930) Remove the extra space in HdfsImageViewer.md

2017-12-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12930:
-
Description: 
There is one extra space in HdfsImageViewer.md that leads to a page rendering error.
{noformat}
* [GETXATTRS](./WebHDFS.html#Get_an_XAttr)
* [LISTXATTRS](./WebHDFS.html#List_all_XAttrs)
* [CONTENTSUMMARY] (./WebHDFS.html#Get_Content_Summary_of_a_Directory)
{noformat}
See the Hadoop 3.0.0 site: 
http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html#Web_Processor

  was:
There is one extra space in HdfsImageViewer.md that leads to a page rendering error.
{noformat}
* [GETXATTRS](./WebHDFS.html#Get_an_XAttr)
* [LISTXATTRS](./WebHDFS.html#List_all_XAttrs)
* [CONTENTSUMMARY] (./WebHDFS.html#Get_Content_Summary_of_a_Directory)
{noformat}
See the Hadoop 3.0.0 site: 
http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html


> Remove the extra space in HdfsImageViewer.md
> 
>
> Key: HDFS-12930
> URL: https://issues.apache.org/jira/browse/HDFS-12930
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: newbie
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Priority: Trivial
>
> There is one extra space in HdfsImageViewer.md that leads to a page rendering error.
> {noformat}
> * [GETXATTRS](./WebHDFS.html#Get_an_XAttr)
> * [LISTXATTRS](./WebHDFS.html#List_all_XAttrs)
> * [CONTENTSUMMARY] (./WebHDFS.html#Get_Content_Summary_of_a_Directory)
> {noformat}
> See the Hadoop 3.0.0 site: 
> http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html#Web_Processor






[jira] [Updated] (HDFS-12930) Remove the extra space in HdfsImageViewer.md

2017-12-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12930:
-
Labels: newbie  (was: )

> Remove the extra space in HdfsImageViewer.md
> 
>
> Key: HDFS-12930
> URL: https://issues.apache.org/jira/browse/HDFS-12930
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: newbie
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
>
> There is one extra space in HdfsImageViewer.md that leads to a page rendering error.
> {noformat}
> * [GETXATTRS](./WebHDFS.html#Get_an_XAttr)
> * [LISTXATTRS](./WebHDFS.html#List_all_XAttrs)
> * [CONTENTSUMMARY] (./WebHDFS.html#Get_Content_Summary_of_a_Directory)
> {noformat}
> See the Hadoop 3.0.0 site: 
> http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html#Web_Processor






[jira] [Updated] (HDFS-12930) Remove the extra space in HdfsImageViewer.md

2017-12-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12930:
-
Component/s: documentation

> Remove the extra space in HdfsImageViewer.md
> 
>
> Key: HDFS-12930
> URL: https://issues.apache.org/jira/browse/HDFS-12930
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
>
> There is one extra space in HdfsImageViewer.md that leads to a page rendering error.
> {noformat}
> * [GETXATTRS](./WebHDFS.html#Get_an_XAttr)
> * [LISTXATTRS](./WebHDFS.html#List_all_XAttrs)
> * [CONTENTSUMMARY] (./WebHDFS.html#Get_Content_Summary_of_a_Directory)
> {noformat}
> See the Hadoop 3.0.0 site: 
> http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html#Web_Processor






[jira] [Updated] (HDFS-12930) Remove the extra space in HdfsImageViewer.md

2017-12-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12930:
-
Component/s: (was: newbie)

> Remove the extra space in HdfsImageViewer.md
> 
>
> Key: HDFS-12930
> URL: https://issues.apache.org/jira/browse/HDFS-12930
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
>
> There is one extra space in HdfsImageViewer.md that leads to a page rendering error.
> {noformat}
> * [GETXATTRS](./WebHDFS.html#Get_an_XAttr)
> * [LISTXATTRS](./WebHDFS.html#List_all_XAttrs)
> * [CONTENTSUMMARY] (./WebHDFS.html#Get_Content_Summary_of_a_Directory)
> {noformat}
> See the Hadoop 3.0.0 site: 
> http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html#Web_Processor






[jira] [Commented] (HDFS-12930) Remove the extra space in HdfsImageViewer.md

2017-12-15 Thread Rahul Pathak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292336#comment-16292336
 ] 

Rahul Pathak commented on HDFS-12930:
-

Hi [~linyiqun]

I will work on this and upload the patch.

Can you please assign this to me?

Rahul

> Remove the extra space in HdfsImageViewer.md
> 
>
> Key: HDFS-12930
> URL: https://issues.apache.org/jira/browse/HDFS-12930
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
>
> There is one extra space in HdfsImageViewer.md that leads to a page rendering error.
> {noformat}
> * [GETXATTRS](./WebHDFS.html#Get_an_XAttr)
> * [LISTXATTRS](./WebHDFS.html#List_all_XAttrs)
> * [CONTENTSUMMARY] (./WebHDFS.html#Get_Content_Summary_of_a_Directory)
> {noformat}
> See the Hadoop 3.0.0 site: 
> http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html#Web_Processor






[jira] [Commented] (HDFS-12931) Hive external table create fails because of failure to fetch block MD5 checksum

2017-12-15 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292357#comment-16292357
 ] 

Mukul Kumar Singh commented on HDFS-12931:
--

Thanks [~xyao] for root-causing this issue. 

This issue happens because, when the SASL encryption key is changed, there is a 
time gap during which the datanode has not yet been updated with the new keys; an 
InvalidEncryptionKeyException can be seen in this case. One way to solve this 
issue is to retry a certain number of times when this error is seen.
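
A minimal sketch of that retry idea (illustrative only, not the actual HDFS-12931 
patch; the helper name and retry bound are made up here):
{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException;

public class ChecksumRetryExample {
  // Retry the checksum call a bounded number of times while the datanode may
  // still hold a stale data-transfer encryption key.
  static FileChecksum checksumWithRetry(DistributedFileSystem dfs, Path path,
      int maxRetries) throws IOException {
    for (int attempt = 0; ; attempt++) {
      try {
        return dfs.getFileChecksum(path);
      } catch (InvalidEncryptionKeyException e) {
        if (attempt >= maxRetries) {
          throw e;  // give up after the configured number of attempts
        }
        // A fresh attempt re-negotiates SASL and can pick up the refreshed key.
      }
    }
  }
}
{code}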

> Hive external table create fails because of failure to fetch block MD5 
> checksum
> ---
>
> Key: HDFS-12931
> URL: https://issues.apache.org/jira/browse/HDFS-12931
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>
> Hive external table creation fails because HDFS fails to fetch the block MD5 
> checksum.
> This happens because of an InvalidEncryptionKeyException.
> {code}
> org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException: 
> Can't re-compute encryption key for nonce, since the required block key 
> (keyID=-1675775329) doesn't exist. Current key: 1496422662
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessageAndNegotiatedCipherOption(DataTransferSaslUtil.java:417)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:478)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:300)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:241)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.newSocketSend(SaslDataTransferClient.java:142)
>   at org.apache.hadoop.hdfs.DFSClient.connectToDN(DFSClient.java:2450)
>   at org.apache.hadoop.hdfs.DFSClient.getFileChecksum(DFSClient.java:2310)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$30.doCall(DistributedFileSystem.java:1569)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$30.doCall(DistributedFileSystem.java:1565)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1577)
> {code}






[jira] [Updated] (HDFS-12799) Ozone: SCM: Close containers: extend SCMCommandResponseProto with SCMCloseContainerCmdResponseProto

2017-12-15 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12799:

Attachment: HDFS-12799-HDFS-7240.004.patch

> Ozone: SCM: Close containers: extend SCMCommandResponseProto with 
> SCMCloseContainerCmdResponseProto
> ---
>
> Key: HDFS-12799
> URL: https://issues.apache.org/jira/browse/HDFS-12799
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12799-HDFS-7240.001.patch, 
> HDFS-12799-HDFS-7240.002.patch, HDFS-12799-HDFS-7240.003.patch, 
> HDFS-12799-HDFS-7240.004.patch
>
>
> This issue is about extending the heartbeat (HB) response protocol between SCM and DN 
> with a command to ask the datanode to close a container. (This is just about 
> extending the protocol, not about fixing the implementation of SCM to handle 
> the state transitions.)






[jira] [Commented] (HDFS-12799) Ozone: SCM: Close containers: extend SCMCommandResponseProto with SCMCloseContainerCmdResponseProto

2017-12-15 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292589#comment-16292589
 ] 

Elek, Marton commented on HDFS-12799:
-

Thanks [~vagarychen], I just uploaded a patch with the checkstyle issues fixed.

> Ozone: SCM: Close containers: extend SCMCommandResponseProto with 
> SCMCloseContainerCmdResponseProto
> ---
>
> Key: HDFS-12799
> URL: https://issues.apache.org/jira/browse/HDFS-12799
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12799-HDFS-7240.001.patch, 
> HDFS-12799-HDFS-7240.002.patch, HDFS-12799-HDFS-7240.003.patch, 
> HDFS-12799-HDFS-7240.004.patch
>
>
> This issue is about extending the heartbeat (HB) response protocol between SCM and DN 
> with a command to ask the datanode to close a container. (This is just about 
> extending the protocol, not about fixing the implementation of SCM to handle 
> the state transitions.)






[jira] [Commented] (HDFS-12698) Ozone: Use time units in the Ozone configuration values

2017-12-15 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292592#comment-16292592
 ] 

Elek, Marton commented on HDFS-12698:
-

Sure, you are right. Thanks for pointing it out. I will create a new patch soon.

> Ozone: Use time units in the Ozone configuration values
> ---
>
> Key: HDFS-12698
> URL: https://issues.apache.org/jira/browse/HDFS-12698
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12698-HDFS-7240.001.patch, 
> HDFS-12698-HDFS-7240.002.patch, HDFS-12698-HDFS-7240.003.patch, 
> HDFS-12698-HDFS-7240.005.patch, HDFS-12698-HDFS-7240.006.patch, 
> HDFS-12698-HDFS-7240.007.patch, HDFS-12698-HDFS-7240.008.patch
>
>
> HDFS-9847 introduced a new way to configure time-related settings 
> using a time unit in the value (e.g. 10s, 5m, ...).
> Because of the new behavior I have seen a lot of warnings during my tests:
> {code}
> 2017-10-19 18:35:19,955 [main] INFO  Configuration.deprecation 
> (Configuration.java:logDeprecation(1306)) - No unit for 
> scm.container.client.idle.threshold(1) assuming MILLISECONDS
> {code}
> So we need to add the time unit to every configuration value. Unfortunately we 
> have a few configuration parameters which include the unit in the key name 
> (e.g. dfs.cblock.block.buffer.flush.interval.seconds or 
> ozone.container.report.interval.ms).
> I suggest removing all the units from the key names and following the new 
> convention where any of the units can be used. 
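
For reference, unit-suffixed values are read with 
{{Configuration.getTimeDuration()}}; a minimal sketch (the key name below is 
illustrative only, not an actual Ozone key):
{code}
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;

public class TimeUnitConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // The unit travels with the value rather than with the key name.
    conf.set("ozone.container.report.interval", "60s");
    long intervalMs = conf.getTimeDuration(
        "ozone.container.report.interval", 60000L, TimeUnit.MILLISECONDS);
    // A bare value such as "60000" would instead trigger the
    // "No unit ... assuming MILLISECONDS" deprecation warning shown above.
    System.out.println("interval = " + intervalMs + " ms");
  }
}
{code}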






[jira] [Updated] (HDFS-12698) Ozone: Use time units in the Ozone configuration values

2017-12-15 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12698:

Attachment: HDFS-12698-HDFS-7240.009.patch

> Ozone: Use time units in the Ozone configuration values
> ---
>
> Key: HDFS-12698
> URL: https://issues.apache.org/jira/browse/HDFS-12698
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12698-HDFS-7240.001.patch, 
> HDFS-12698-HDFS-7240.002.patch, HDFS-12698-HDFS-7240.003.patch, 
> HDFS-12698-HDFS-7240.005.patch, HDFS-12698-HDFS-7240.006.patch, 
> HDFS-12698-HDFS-7240.007.patch, HDFS-12698-HDFS-7240.008.patch, 
> HDFS-12698-HDFS-7240.009.patch
>
>
> HDFS-9847 introduced a new way to configure time-related settings 
> using a time unit in the value (e.g. 10s, 5m, ...).
> Because of the new behavior I have seen a lot of warnings during my tests:
> {code}
> 2017-10-19 18:35:19,955 [main] INFO  Configuration.deprecation 
> (Configuration.java:logDeprecation(1306)) - No unit for 
> scm.container.client.idle.threshold(1) assuming MILLISECONDS
> {code}
> So we need to add the time unit to every configuration value. Unfortunately we 
> have a few configuration parameters which include the unit in the key name 
> (e.g. dfs.cblock.block.buffer.flush.interval.seconds or 
> ozone.container.report.interval.ms).
> I suggest removing all the units from the key names and following the new 
> convention where any of the units can be used. 






[jira] [Commented] (HDFS-12698) Ozone: Use time units in the Ozone configuration values

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292622#comment-16292622
 ] 

genericqa commented on HDFS-12698:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-12698 does not apply to HDFS-7240. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12698 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902376/HDFS-12698-HDFS-7240.009.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22418/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Use time units in the Ozone configuration values
> ---
>
> Key: HDFS-12698
> URL: https://issues.apache.org/jira/browse/HDFS-12698
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12698-HDFS-7240.001.patch, 
> HDFS-12698-HDFS-7240.002.patch, HDFS-12698-HDFS-7240.003.patch, 
> HDFS-12698-HDFS-7240.005.patch, HDFS-12698-HDFS-7240.006.patch, 
> HDFS-12698-HDFS-7240.007.patch, HDFS-12698-HDFS-7240.008.patch, 
> HDFS-12698-HDFS-7240.009.patch
>
>
> HDFS-9847 introduced a new way to configure time-related settings 
> using a time unit in the value (e.g. 10s, 5m, ...).
> Because of the new behavior I have seen a lot of warnings during my tests:
> {code}
> 2017-10-19 18:35:19,955 [main] INFO  Configuration.deprecation 
> (Configuration.java:logDeprecation(1306)) - No unit for 
> scm.container.client.idle.threshold(1) assuming MILLISECONDS
> {code}
> So we need to add the time unit to every configuration value. Unfortunately we 
> have a few configuration parameters which include the unit in the key name 
> (e.g. dfs.cblock.block.buffer.flush.interval.seconds or 
> ozone.container.report.interval.ms).
> I suggest removing all the units from the key names and following the new 
> convention where any of the units can be used. 






[jira] [Updated] (HDFS-12741) ADD support for KSM --createObjectStore command

2017-12-15 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-12741:
---
Attachment: HDFS-12741-HDFS-7240.008.patch

Patch v8: added the javadoc comments. 

> ADD support for KSM --createObjectStore command
> ---
>
> Key: HDFS-12741
> URL: https://issues.apache.org/jira/browse/HDFS-12741
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12741-HDFS-7240.001.patch, 
> HDFS-12741-HDFS-7240.002.patch, HDFS-12741-HDFS-7240.003.patch, 
> HDFS-12741-HDFS-7240.004.patch, HDFS-12741-HDFS-7240.005.patch, 
> HDFS-12741-HDFS-7240.006.patch, HDFS-12741-HDFS-7240.007.patch, 
> HDFS-12741-HDFS-7240.008.patch
>
>
> The KSM --createObjectStore command reads the Ozone configuration, creates the 
> KSM version file, and reads the SCM version file from the specified SCM.
>   
> The SCM version file is stored in the KSM metadata directory, and before 
> communicating with an SCM, KSM verifies that it is communicating with an SCM 
> with which the relationship has been established via the createObjectStore command.






[jira] [Commented] (HDFS-12799) Ozone: SCM: Close containers: extend SCMCommandResponseProto with SCMCloseContainerCmdResponseProto

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292650#comment-16292650
 ] 

genericqa commented on HDFS-12799:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
24s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
12s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m 
10s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
10s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  1m 10s{color} | 
{color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  1m 
10s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
27s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12799 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902372/HDFS-12799-HDFS-7240.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 31cb22494426 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / 43a1334 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22417/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22417/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-

[jira] [Assigned] (HDFS-12555) HDFS federation should support configuring a secondary directory

2017-12-15 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDFS-12555:


Assignee: luoge123

> HDFS federation should support configuring a secondary directory 
> -
>
> Key: HDFS-12555
> URL: https://issues.apache.org/jira/browse/HDFS-12555
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
> Environment: 2.6.0-cdh5.10.0
>Reporter: luoge123
>Assignee: luoge123
> Fix For: 2.6.0
>
> Attachments: HDFS-12555.001.patch, HDFS-12555.002.patch
>
>
> HDFS federation supports multiple namenodes and horizontally scales the file 
> system namespace. As the amount of data grows, even when a single group of 
> namenodes manages a single directory, the namenode still hits performance 
> bottlenecks. In order to reduce the pressure on the namenode, we can split out 
> a secondary directory and manage it with a new namenode. This is 
> transparent to users. 
> For example, nn1 only manages the /user directory; when nn1 hits performance 
> bottlenecks, we can split out the /user/hive directory and use nn2 
> to manage it.
> That means core-site.xml should support a configuration like the following:
> <property>
>   <name>fs.viewfs.mounttable.nsX.link./user</name>
>   <value>hdfs://nn1:8020/user</value>
> </property>
> <property>
>   <name>fs.viewfs.mounttable.nsX.link./user/hive</name>
>   <value>hdfs://nn2:8020/user/hive</value>
> </property>






[jira] [Commented] (HDFS-12555) HDFS federation should support configuring a secondary directory

2017-12-15 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292719#comment-16292719
 ] 

Arpit Agarwal commented on HDFS-12555:
--

[~luoge123], I've added you as a contributor and assigned this Jira to you.

Thank you for contributing to HDFS!

> HDFS federation should support configuring a secondary directory 
> -
>
> Key: HDFS-12555
> URL: https://issues.apache.org/jira/browse/HDFS-12555
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
> Environment: 2.6.0-cdh5.10.0
>Reporter: luoge123
>Assignee: luoge123
> Fix For: 2.6.0
>
> Attachments: HDFS-12555.001.patch, HDFS-12555.002.patch
>
>
> HDFS federation supports multiple namenodes and horizontally scales the file 
> system namespace. As the amount of data grows, even when a single group of 
> namenodes manages a single directory, the namenode still hits performance 
> bottlenecks. In order to reduce the pressure on the namenode, we can split out 
> a secondary directory and manage it with a new namenode. This is 
> transparent to users. 
> For example, nn1 only manages the /user directory; when nn1 hits performance 
> bottlenecks, we can split out the /user/hive directory and use nn2 
> to manage it.
> That means core-site.xml should support a configuration like the following:
> <property>
>   <name>fs.viewfs.mounttable.nsX.link./user</name>
>   <value>hdfs://nn1:8020/user</value>
> </property>
> <property>
>   <name>fs.viewfs.mounttable.nsX.link./user/hive</name>
>   <value>hdfs://nn2:8020/user/hive</value>
> </property>






[jira] [Commented] (HDFS-12929) There is an error message when hdfs dfsadmin is run against a ViewFS config

2017-12-15 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292749#comment-16292749
 ] 

Arpit Agarwal commented on HDFS-12929:
--

Hi [~KaiXinXIaoLei], this behavior probably makes sense since a viewfs 
configuration may include multiple underlying HDFS and non-HDFS filesystems. In 
that case it is not clear which filesystem the dfsadmin command should operate 
against.
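
As a rough illustration, this is the kind of type check that produces the message 
(an approximation of the dfsadmin pattern, not the literal DFSAdmin source):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class SafeModeCheckExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // With fs.defaultFS pointing at viewfs://..., this cast is impossible,
    // hence the "is not an HDFS file system" error from dfsadmin.
    if (!(fs instanceof DistributedFileSystem)) {
      throw new IllegalArgumentException("FileSystem " + fs.getUri()
          + " is not an HDFS file system");
    }
    DistributedFileSystem dfs = (DistributedFileSystem) fs;
    boolean inSafeMode = dfs.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_GET);
    System.out.println("Safe mode is " + (inSafeMode ? "ON" : "OFF"));
  }
}
{code}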

> There is an error message when hdfs dfsadmin is run against a ViewFS config
> -
>
> Key: HDFS-12929
> URL: https://issues.apache.org/jira/browse/HDFS-12929
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: KaiXinXIaoLei
>
> With a ViewFS configuration, I run "hdfs dfsadmin -safemode get" and get this error:
> {noformat}
> safemode: FileSystem viewfs://XX/ is not an HDFS file system
> {noformat}






[jira] [Updated] (HDFS-12904) Add DataTransferThrottler to the Datanode transfers

2017-12-15 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12904:
---
Attachment: HDFS-12904.001.patch

Thanks [~hanishakoneru] for checking.
I agree, we should distinguish the data transfer in the configuration key, 
added in [^HDFS-12904.001.patch].
I personally think we should even split client from internal replication but 
that's for another JIRA as it would require a pretty big rewrite of the code 
using the {{DataXceiver}}.

> Add DataTransferThrottler to the Datanode transfers
> ---
>
> Key: HDFS-12904
> URL: https://issues.apache.org/jira/browse/HDFS-12904
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-12904.000.patch, HDFS-12904.001.patch
>
>
> The {{DataXceiverServer}} already uses throttling for the balancing. The 
> Datanode should also allow throttling the regular data transfers.
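
For context, a minimal sketch of how the existing throttler class is driven (a 
hypothetical copy loop; the new configuration key and its wiring are whatever the 
patch introduces):
{code}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.hdfs.util.DataTransferThrottler;

public class ThrottledCopyExample {
  // Cap a transfer loop at roughly 10 MB/s.
  static void copyThrottled(InputStream in, OutputStream out) throws IOException {
    DataTransferThrottler throttler = new DataTransferThrottler(10L * 1024 * 1024);
    byte[] buf = new byte[64 * 1024];
    int read;
    while ((read = in.read(buf)) > 0) {
      out.write(buf, 0, read);
      throttler.throttle(read);  // blocks until the bytes fit the per-period budget
    }
  }
}
{code}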






[jira] [Commented] (HDFS-12904) Add DataTransferThrottler to the Datanode transfers

2017-12-15 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292797#comment-16292797
 ] 

Íñigo Goiri commented on HDFS-12904:


For the checkstyle issues:
* I wouldn't change the visibility of the {{dataThrottler}}, to be consistent 
with the definition of the other throttler.
* I would not split the key even though it is longer than 80 characters; there are 
other exceptions in {{DFSConfigKeys}} (QA hasn't complained yet but it will for 
[^HDFS-12904.001.patch]).

> Add DataTransferThrottler to the Datanode transfers
> ---
>
> Key: HDFS-12904
> URL: https://issues.apache.org/jira/browse/HDFS-12904
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-12904.000.patch, HDFS-12904.001.patch
>
>
> The {{DataXceiverServer}} already uses throttling for the balancing. The 
> Datanode should also allow throttling the regular data transfers.






[jira] [Commented] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-15 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292799#comment-16292799
 ] 

Íñigo Goiri commented on HDFS-12895:


Thanks [~linyiqun] for working on this.
We will start using it right away.

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF, incompatible
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HDFS-12895-branch-2.001.patch, HDFS-12895.001.patch, 
> HDFS-12895.002.patch, HDFS-12895.003.patch, HDFS-12895.004.patch, 
> HDFS-12895.005.patch, HDFS-12895.006.patch, HDFS-12895.007.patch
>
>
> Adding ACL support for the Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table has its owner, group name and permission.
> For the mount table permissions we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove or update this mount table info.
> # EXECUTE permission: This won't be used.
> The add command of the mount table will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination> 
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode> is UNIX-style permissions for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to 0755.*
> If we want to update the ACL info of a specified mount table, just execute the add 
> command again. This command not only adds a new mount table entry but also 
> updates an existing mount table once it finds that the given mount table exists. 
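
For illustration, the kind of access check those {{FsPermission}} bits allow (a 
hedged sketch, not the actual RBF implementation):
{code}
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

public class MountTablePermissionExample {
  public static void main(String[] args) {
    // Default 0755: owner rwx, group r-x, others r-x.
    FsPermission perm = new FsPermission((short) 0755);
    boolean ownerCanWrite = perm.getUserAction().implies(FsAction.WRITE);   // true
    boolean otherCanWrite = perm.getOtherAction().implies(FsAction.WRITE);  // false
    System.out.println("owner write=" + ownerCanWrite
        + ", other write=" + otherCanWrite);
  }
}
{code}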






[jira] [Updated] (HDFS-12818) Support multiple storages in DataNodeCluster / SimulatedFSDataset

2017-12-15 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-12818:
---
Attachment: HDFS-12818.009.patch

Great, thanks [~shv]. Attaching final v009 patch fixing checkstyle.

> Support multiple storages in DataNodeCluster / SimulatedFSDataset
> -
>
> Key: HDFS-12818
> URL: https://issues.apache.org/jira/browse/HDFS-12818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HDFS-12818.000.patch, HDFS-12818.001.patch, 
> HDFS-12818.002.patch, HDFS-12818.003.patch, HDFS-12818.004.patch, 
> HDFS-12818.005.patch, HDFS-12818.006.patch, HDFS-12818.007.patch, 
> HDFS-12818.008.patch, HDFS-12818.009.patch
>
>
> Currently {{SimulatedFSDataset}} (and thus, {{DataNodeCluster}} with 
> {{-simulated}}) only supports a single storage per {{DataNode}}. Given that 
> the number of storages can have important implications on the performance of 
> block report processing, it would be useful for these classes to support a 
> multiple storage configuration.






[jira] [Commented] (HDFS-12070) Failed block recovery leaves files open indefinitely and at risk for data loss

2017-12-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292829#comment-16292829
 ] 

Kihwal Lee commented on HDFS-12070:
---

I have recently encountered two occurrences of this, both involving a faulty 
drive.  In general, if a recovery attempt on a faulty node fails in stage 1, 
everything will be fine as the node will be excluded in stage 2.  If not, there 
will be perpetual failures and the lease won't be recovered. The presence of a 
faulty drive tends to cause this issue.

- Case 1) During the very first recovery attempt, the rename of the meta file 
succeeded during finalization in stage 2 but the data file rename failed. 
This caused perpetual failures in subsequent recovery attempts.

- Case 2) The drive containing a replica to be recovered failed and was "removed" 
hours before the first recovery attempt.  But the recovery included the node 
and it still managed to find the replica info and successfully complete stage 1. 
Stage 2 fails as the file system is read-only and files cannot be 
renamed.  Hence perpetual recovery failures.

Fixing case 1 specifically is easy. The existing limited-scope meta/data file 
existence check in stage 1 can be expanded to all replica states before 
proceeding. If the meta or the data file is missing, there is no point in 
declaring success in stage 1. The node will be excluded in stage 2, so the 
recovery will succeed on the second attempt.
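
A minimal sketch of that expanded stage 1 check (illustrative only, not the actual 
datanode code path):
{code}
import java.io.File;
import java.io.IOException;

public class ReplicaFileCheckExample {
  // Before declaring stage 1 a success, verify that both the block file and its
  // meta file still exist on disk; otherwise fail fast so the node is excluded
  // in stage 2 of the next recovery attempt.
  static void checkReplicaFiles(File blockFile, File metaFile) throws IOException {
    if (blockFile == null || !blockFile.isFile()) {
      throw new IOException("Block file is missing: " + blockFile);
    }
    if (metaFile == null || !metaFile.isFile()) {
      throw new IOException("Meta file is missing: " + metaFile);
    }
  }
}
{code}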

{{\_...\_}}

While looking at case 2 though, I realized that it is a much more complicated 
issue.  First of all, why are we failing the entire recovery when a node failed 
in stage 2? Can't we simply exclude the failed node and commit?  The following 
is the code in stage 2 that causes recovery failure.
{code}
  // If any of the data-nodes failed, the recovery fails, because
  // we never know the actual state of the replica on failed data-nodes.
  // The recovery should be started over.
  if (!failedList.isEmpty()) {
throw new IOException("Cannot recover " + block
+ ", the following datanodes failed: " + failedList);
  }
{code}
This isn't in 0.20.205 or 1.x, but is present in 0.21 and later. It took some 
time (had to go back to svn) to trace down.  This was added by HDFS-658 as a 
part of the "new" append feature. branch-1 had the "old" append.  According to 
the design doc, the recovery should not end there.

{panel}
6.5. Block Recovery
(...)
c. Recover replicas that participated in length agreement in step b.iv. (Ed. stage 1)
d. PD (Ed. primary datanode) checks the result of c. If no DataNode succeeds, block 
recovery fails. *If some succeed and some fail, PD gets a new generation stamp from 
NN and repeats block recovery with the successful DataNodes.* (...)
{panel}

The current recovery should fail, but the next recovery should be tried with 
only the successful ones.  This isn't the case (and causes perpetual failures), 
so we can call this *an incomplete implementation* of the design.

To fully conform to the design, the PD needs to be able to initiate a new 
recovery or tell the namenode to exclude the failed node from the expected 
locations.  Alternatively, the PD can tell the failed node to reject further 
participation in recovery of the block, thus making it fail in stage 1. 
However, it might be less reliable as it involves a faulty node. To trigger an 
immediate retry of recovery (i.e. not 1 hour later), active notification from 
PD to NN will be necessary.

> Failed block recovery leaves files open indefinitely and at risk for data loss
> --
>
> Key: HDFS-12070
> URL: https://issues.apache.org/jira/browse/HDFS-12070
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>
> Files will remain open indefinitely if block recovery fails which creates a 
> high risk of data loss.  The replication monitor will not replicate these 
> blocks.
> The NN provides the primary node a list of candidate nodes for recovery which 
> involves a 2-stage process. The primary node removes any candidates that 
> cannot init replica recovery (essentially alive and knows about the block) to 
> create a sync list.  Stage 2 issues updates to the sync list – _but fails if 
> any node fails_ unlike the first stage.  The NN should be informed of nodes 
> that did succeed.
> Manual recovery will also fail until the problematic node is temporarily 
> stopped so a connection refused will induce the bad node to be pruned from 
> the candidates.  Recovery succeeds, the lease is released, under replication 
> is fixed, and block is invalidated from the bad node.




[jira] [Commented] (HDFS-12741) ADD support for KSM --createObjectStore command

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292925#comment-16292925
 ] 

genericqa commented on HDFS-12741:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 5s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
9s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}123m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}183m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestHAMetrics |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.ksm.TestKeySpaceManager |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12741 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902379/HDFS-12741-HDFS-7240.008.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux aa9519282818 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / 43a1334 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22419/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/jo

[jira] [Commented] (HDFS-12070) Failed block recovery leaves files open indefinitely and at risk for data loss

2017-12-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292929#comment-16292929
 ] 

Kihwal Lee commented on HDFS-12070:
---

bq. the PD needs to ... tell the namenode to exclude the failed node from the 
expected locations. 
It appears calling {{commitBlockSynchronization()}} with {{closeFile == false}} 
might do the trick. On the NN side, we could make it do block/lease recovery 
again soon. The older NNs will still work, but with 1 hour delay until the 
retry.   

> Failed block recovery leaves files open indefinitely and at risk for data loss
> --
>
> Key: HDFS-12070
> URL: https://issues.apache.org/jira/browse/HDFS-12070
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>
> Files will remain open indefinitely if block recovery fails which creates a 
> high risk of data loss.  The replication monitor will not replicate these 
> blocks.
> The NN provides the primary node a list of candidate nodes for recovery which 
> involves a 2-stage process. The primary node removes any candidates that 
> cannot init replica recovery (essentially alive and knows about the block) to 
> create a sync list.  Stage 2 issues updates to the sync list – _but fails if 
> any node fails_ unlike the first stage.  The NN should be informed of nodes 
> that did succeed.
> Manual recovery will also fail until the problematic node is temporarily 
> stopped so a connection refused will induce the bad node to be pruned from 
> the candidates.  Recovery succeeds, the lease is released, under replication 
> is fixed, and block is invalidated from the bad node.






[jira] [Comment Edited] (HDFS-12070) Failed block recovery leaves files open indefinitely and at risk for data loss

2017-12-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292929#comment-16292929
 ] 

Kihwal Lee edited comment on HDFS-12070 at 12/15/17 5:59 PM:
-

bq. the PD needs to ... tell the namenode to exclude the failed node from the 
expected locations. 
It appears calling {{commitBlockSynchronization()}} with {{closeFile == false}} 
might do the trick. On the NN side, we could make it do block/lease recovery 
again soon. The older NNs will still work, but with 1 hour delay until the 
retry.   


was (Author: kihwal):
bq. the PD needs to ... tell the namenode to exclude the failed node from the 
expected locations. 
It appears calling {{commitBlockSynchronization()}} with {{closeFile == false}} 
might do the trick. On the NN size, we could make it do block/lease recovery 
again soon. The older NNs will still work, but with 1 hour delay until the 
retry.   

> Failed block recovery leaves files open indefinitely and at risk for data loss
> --
>
> Key: HDFS-12070
> URL: https://issues.apache.org/jira/browse/HDFS-12070
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>
> Files will remain open indefinitely if block recovery fails which creates a 
> high risk of data loss.  The replication monitor will not replicate these 
> blocks.
> The NN provides the primary node a list of candidate nodes for recovery which 
> involves a 2-stage process. The primary node removes any candidates that 
> cannot init replica recovery (essentially alive and knows about the block) to 
> create a sync list.  Stage 2 issues updates to the sync list – _but fails if 
> any node fails_ unlike the first stage.  The NN should be informed of nodes 
> that did succeed.
> Manual recovery will also fail until the problematic node is temporarily 
> stopped so a connection refused will induce the bad node to be pruned from 
> the candidates.  Recovery succeeds, the lease is released, under replication 
> is fixed, and block is invalidated from the bad node.






[jira] [Commented] (HDFS-12712) [9806] Code style cleanup

2017-12-15 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292939#comment-16292939
 ] 

Íñigo Goiri commented on HDFS-12712:


Not sure why Yetus is giving all the deprecations on earth but 
[^HDFS-12712-HDFS-9806.003.patch] LGTM.
Failed unit tests also seem unrelated.
+1

> [9806] Code style cleanup
> -
>
> Key: HDFS-12712
> URL: https://issues.apache.org/jira/browse/HDFS-12712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Minor
> Attachments: HDFS-12712-HDFS-9806.001.patch, 
> HDFS-12712-HDFS-9806.002.patch, HDFS-12712-HDFS-9806.003.patch
>
>
> The code for HDFS-9806 could use some style cleaning before merging.






[jira] [Updated] (HDFS-12819) Setting/Unsetting EC policy shows warning if the directory is not empty

2017-12-15 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12819:
-
   Resolution: Fixed
Fix Version/s: 3.0.1
   Status: Resolved  (was: Patch Available)

Thanks, [~xiaochen].

Committed to {{trunk}} and {{branch-3.0}}.

> Setting/Unsetting EC policy shows warning if the directory is not empty
> ---
>
> Key: HDFS-12819
> URL: https://issues.apache.org/jira/browse/HDFS-12819
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Fix For: 3.0.1
>
> Attachments: HDFS-12819.00.patch, HDFS-12819.01.patch, 
> HDFS-12819.02.patch
>
>
> Because the existing data will not be converted when we set or unset an EC 
> policy on a directory, a warning from the CLI would help to set users' 
> expectations.






[jira] [Commented] (HDFS-12712) [9806] Code style cleanup

2017-12-15 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292962#comment-16292962
 ] 

Virajith Jalaparti commented on HDFS-12712:
---

Thanks for taking a look, [~elgoiri]. Committing 
[^HDFS-12712-HDFS-9806.003.patch] to the feature branch.

> [9806] Code style cleanup
> -
>
> Key: HDFS-12712
> URL: https://issues.apache.org/jira/browse/HDFS-12712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Minor
> Attachments: HDFS-12712-HDFS-9806.001.patch, 
> HDFS-12712-HDFS-9806.002.patch, HDFS-12712-HDFS-9806.003.patch
>
>
> The code for HDFS-9806 could use some style cleaning before merging.






[jira] [Updated] (HDFS-12712) [9806] Code style cleanup

2017-12-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12712:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> [9806] Code style cleanup
> -
>
> Key: HDFS-12712
> URL: https://issues.apache.org/jira/browse/HDFS-12712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Minor
> Attachments: HDFS-12712-HDFS-9806.001.patch, 
> HDFS-12712-HDFS-9806.002.patch, HDFS-12712-HDFS-9806.003.patch
>
>
> The code for HDFS-9806 could use some style cleaning before merging.






[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9806:
-
Status: Open  (was: Patch Available)

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch, HDFS-9806.002.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.






[jira] [Commented] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-15 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292963#comment-16292963
 ] 

Ajay Kumar commented on HDFS-12881:
---

All of the tests that failed in the branch-2 run pass locally. 

> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HDFS-12881
> URL: https://issues.apache.org/jira/browse/HDFS-12881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HDFS-12881-branch-2.10.0.001.patch, 
> HDFS-12881.001.patch, HDFS-12881.002.patch, HDFS-12881.003.patch, 
> HDFS-12881.004.patch
>
>
> There are a few places in HDFS code that are closing an output stream with 
> IOUtils.cleanupWithLogger like this:
> {code}
>   try {
> ...write to outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> This suppresses any IOException that occurs during the close() method which 
> could lead to partial/corrupted output without throwing a corresponding 
> exception.  The code should either use try-with-resources or explicitly close 
> the stream within the try block so the exception thrown during close() is 
> properly propagated as exceptions during write operations are.
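
For comparison, a small self-contained example of the try-with-resources form 
suggested above; the file path and stream type are illustrative, not taken from 
the HDFS code in question.

{code}
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ClosePropagationExample {
  public static void main(String[] args) throws IOException {
    // The stream is closed automatically when the block exits; an
    // IOException thrown by close() (for example, a failed flush of
    // buffered bytes) propagates to the caller instead of being logged
    // and silently dropped, as happens when the stream is closed with a
    // logging cleanup helper in a finally block.
    try (OutputStream out = Files.newOutputStream(Paths.get(args[0]))) {
      out.write("example".getBytes(StandardCharsets.UTF_8));
    }
  }
}
{code}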






[jira] [Assigned] (HDFS-12885) Add visibility/stability annotations

2017-12-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti reassigned HDFS-12885:
-

Assignee: Chris Douglas

> Add visibility/stability annotations
> 
>
> Key: HDFS-12885
> URL: https://issues.apache.org/jira/browse/HDFS-12885
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Trivial
> Attachments: HDFS-12885-HDFS-9806.00.patch, 
> HDFS-12885-HDFS-9806.001.patch
>
>
> Classes added in HDFS-9806 should include stability/visibility annotations 
> (HADOOP-5073)






[jira] [Commented] (HDFS-12819) Setting/Unsetting EC policy shows warning if the directory is not empty

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292974#comment-16292974
 ] 

Hudson commented on HDFS-12819:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13387 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13387/])
HDFS-12819. Setting/Unsetting EC policy shows warning if the directory (lei: 
rev 1c15b1751c0698bd3063d5c25f556d4821b161d2)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/ECAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testErasureCodingConf.xml


> Setting/Unsetting EC policy shows warning if the directory is not empty
> ---
>
> Key: HDFS-12819
> URL: https://issues.apache.org/jira/browse/HDFS-12819
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Fix For: 3.0.1
>
> Attachments: HDFS-12819.00.patch, HDFS-12819.01.patch, 
> HDFS-12819.02.patch
>
>
> Because the existing data will not be converted when we set or unset an EC 
> policy on a directory, a warning from the CLI would help to set users' 
> expectations.






[jira] [Updated] (HDFS-12927) Update erasure coding doc to address unsupported APIs

2017-12-15 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12927:
-
   Resolution: Fixed
Fix Version/s: 3.0.1
   Status: Resolved  (was: Patch Available)

Thanks for the review, [~xiaochen]

> Update erasure coding doc to address unsupported APIs
> -
>
> Key: HDFS-12927
> URL: https://issues.apache.org/jira/browse/HDFS-12927
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.1
>
> Attachments: HDFS-12927.00.patch
>
>
> {{Concat}}, {{truncate}}, {{setReplication}} are not (fully) supported with 
> EC files. We should update the document to address them explicitly. 






[jira] [Commented] (HDFS-12070) Failed block recovery leaves files open indefinitely and at risk for data loss

2017-12-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292994#comment-16292994
 ] 

Kihwal Lee commented on HDFS-12070:
---

To complete the history lesson, I traced down when {{closeFile}} was added to 
{{commitBlockSynchronization()}} and why no one is calling it with {{false}} 
anymore.

It turns out the {{closeFile}} argument has existed since the dawn of 
{{commitBlockSynchronization()}}. It was added by HADOOP-3310 to 0.18 in 2008. 
The old append depended on it.  Even then, the normal lease recovery would 
always call it with {{closeFile == true}}.  There was a new 
{{ClientDatanodeProtocol}} method, {{recoverBlock()}}, which caused 
{{commitBlockSynchronization()}} to be called with {{closeFile == false}}.  I 
guess this disappeared when the {{recoverBlock()}} client command was removed 
from the datanode. Today, a {{recoverLease()}} call to the namenode can be used 
instead.  It is really fortunate that the {{closeFile}} option was added 
initially and has survived for 9 years in spite of its lack of use.
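
A minimal sketch of that {{recoverLease()}} path from the client side, assuming 
an HDFS URI and the default configuration; the polling interval and argument 
handling are illustrative only.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class RecoverLeaseExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path file = new Path(args[0]);
    try (FileSystem fs = FileSystem.get(file.toUri(), conf)) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      // recoverLease() returns true once the file is closed; while block
      // recovery is still in progress it returns false, so poll with a delay.
      while (!dfs.recoverLease(file)) {
        Thread.sleep(5000L);
      }
    }
  }
}
{code}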

> Failed block recovery leaves files open indefinitely and at risk for data loss
> --
>
> Key: HDFS-12070
> URL: https://issues.apache.org/jira/browse/HDFS-12070
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>
> Files will remain open indefinitely if block recovery fails which creates a 
> high risk of data loss.  The replication monitor will not replicate these 
> blocks.
> The NN provides the primary node a list of candidate nodes for recovery which 
> involves a 2-stage process. The primary node removes any candidates that 
> cannot init replica recovery (essentially alive and knows about the block) to 
> create a sync list.  Stage 2 issues updates to the sync list – _but fails if 
> any node fails_ unlike the first stage.  The NN should be informed of nodes 
> that did succeed.
> Manual recovery will also fail until the problematic node is temporarily 
> stopped so a connection refused will induce the bad node to be pruned from 
> the candidates.  Recovery succeeds, the lease is released, under replication 
> is fixed, and block is invalidated from the bad node.






[jira] [Assigned] (HDFS-12070) Failed block recovery leaves files open indefinitely and at risk for data loss

2017-12-15 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HDFS-12070:
-

Assignee: Kihwal Lee

> Failed block recovery leaves files open indefinitely and at risk for data loss
> --
>
> Key: HDFS-12070
> URL: https://issues.apache.org/jira/browse/HDFS-12070
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Kihwal Lee
>
> Files will remain open indefinitely if block recovery fails which creates a 
> high risk of data loss.  The replication monitor will not replicate these 
> blocks.
> The NN provides the primary node a list of candidate nodes for recovery which 
> involves a 2-stage process. The primary node removes any candidates that 
> cannot init replica recovery (essentially alive and knows about the block) to 
> create a sync list.  Stage 2 issues updates to the sync list – _but fails if 
> any node fails_ unlike the first stage.  The NN should be informed of nodes 
> that did succeed.
> Manual recovery will also fail until the problematic node is temporarily 
> stopped so a connection refused will induce the bad node to be pruned from 
> the candidates.  Recovery succeeds, the lease is released, under replication 
> is fixed, and block is invalidated from the bad node.






[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9806:
-
Attachment: HDFS-9806.003.patch

Posting a rebased patch with all the changes in the HDFS-9806 feature branch.

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch, HDFS-9806.002.patch, HDFS-9806.003.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.






[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9806:
-
Status: Patch Available  (was: Open)

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch, HDFS-9806.002.patch, HDFS-9806.003.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.






[jira] [Commented] (HDFS-12927) Update erasure coding doc to address unsupported APIs

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293020#comment-16293020
 ] 

Hudson commented on HDFS-12927:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13388 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13388/])
HDFS-12927. Update erasure coding doc to address unsupported APIs. (lei: rev 
949be14b0881186d76c3b60ee2f39ce67dc1654c)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md


> Update erasure coding doc to address unsupported APIs
> -
>
> Key: HDFS-12927
> URL: https://issues.apache.org/jira/browse/HDFS-12927
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.1
>
> Attachments: HDFS-12927.00.patch
>
>
> {{Concat}}, {{truncate}}, {{setReplication}} are not (fully) supported with 
> EC files. We should update the document to address them explicitly. 






[jira] [Commented] (HDFS-12799) Ozone: SCM: Close containers: extend SCMCommandResponseProto with SCMCloseContainerCmdResponseProto

2017-12-15 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293035#comment-16293035
 ] 

Chen Liang commented on HDFS-12799:
---

Thanks [~elek] for the update! It looks like there were compilation failures; 
the patch might need to be rebased. Would you mind taking a look? Thanks

> Ozone: SCM: Close containers: extend SCMCommandResponseProto with 
> SCMCloseContainerCmdResponseProto
> ---
>
> Key: HDFS-12799
> URL: https://issues.apache.org/jira/browse/HDFS-12799
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12799-HDFS-7240.001.patch, 
> HDFS-12799-HDFS-7240.002.patch, HDFS-12799-HDFS-7240.003.patch, 
> HDFS-12799-HDFS-7240.004.patch
>
>
> This issue is about extending the HB response protocol between SCM and DN 
> with a command to ask the datanode to close a container. (This is just about 
> extending the protocol, not about fixing the implementation of SCM to handle 
> the state transitions.)
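
Purely as an illustration of the kind of dispatch the datanode side would need 
once the new command type exists, here is a hedged sketch with hypothetical 
names; it does not reflect the actual Ozone/SCM protocol classes or the 
attached patches.

{code}
public class HeartbeatCommandDispatchSketch {

  // Hypothetical command types; only CLOSE_CONTAINER is the new one here.
  enum ScmCommandType { REREGISTER, DELETE_BLOCKS, CLOSE_CONTAINER }

  static final class ScmCommand {
    final ScmCommandType type;
    final long containerId;
    ScmCommand(ScmCommandType type, long containerId) {
      this.type = type;
      this.containerId = containerId;
    }
  }

  static void dispatch(ScmCommand cmd) {
    switch (cmd.type) {
      case CLOSE_CONTAINER:
        // The datanode would stop accepting writes for this container and
        // move it to a closed state; the state transitions themselves are
        // out of scope for this issue.
        System.out.println("closing container " + cmd.containerId);
        break;
      default:
        System.out.println("no handler yet for " + cmd.type);
    }
  }

  public static void main(String[] args) {
    dispatch(new ScmCommand(ScmCommandType.CLOSE_CONTAINER, 42L));
  }
}
{code}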






[jira] [Comment Edited] (HDFS-12070) Failed block recovery leaves files open indefinitely and at risk for data loss

2017-12-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292994#comment-16292994
 ] 

Kihwal Lee edited comment on HDFS-12070 at 12/15/17 7:20 PM:
-

To complete the history lesson, I traced down when {{closeFile}} was added to 
{{commitBlockSynchronization()}} and why no one is calling it with {{false}} 
anymore.

It turns out the {{closeFile}} argument has existed since the dawn of 
{{commitBlockSynchronization()}}. It was added by HADOOP-3310 to 0.18 in 2008. 
The old append (HADOOP-1700) depended on it.  Even then, the normal lease 
recovery would always call it with {{closeFile == true}}.  There was a new 
{{ClientDatanodeProtocol}} method, {{recoverBlock()}}, which caused 
{{commitBlockSynchronization()}} to be called with {{closeFile == false}}.  I 
guess this disappeared when the {{recoverBlock()}} client command was removed 
from the datanode. Today, a {{recoverLease()}} call to the namenode can be used 
instead.  It is really fortunate that the {{closeFile}} option was added 
initially and has survived for 9 years in spite of its lack of use.


was (Author: kihwal):
To complete the history lesson, I traced down when {{closeFile}} was added to 
{{commitBlockSynchronization()}} and why no one is calling it with {{false}} 
anymore.

It turns out, the {{closeFile}} argument has existed since the dawn of 
{{commitBlockSynchronization()}}. It was added by HADOOP-3310 to 0.18 in 2008. 
The old append dependeds on it.  Even in this, the normal lease recovery would 
always call it with {{closeFile == true}}.  There was a new 
{{ClientDatanodeProtocol}} method, {{recoverBlock()}}, which causes 
{{commitBlockSynchronization()}} to be called with {{closeFile == false}}.  I 
guess this disappeard when {{recoverBlock()}} client command was removed from 
datanode. Today, a {{recoverLease()}} call to namenode can be used instead.  It 
is really fortunate that the {{closeFile}} option was initially added and has 
survived for 9 years in spite of lack use.

> Failed block recovery leaves files open indefinitely and at risk for data loss
> --
>
> Key: HDFS-12070
> URL: https://issues.apache.org/jira/browse/HDFS-12070
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Kihwal Lee
>
> Files will remain open indefinitely if block recovery fails which creates a 
> high risk of data loss.  The replication monitor will not replicate these 
> blocks.
> The NN provides the primary node a list of candidate nodes for recovery which 
> involves a 2-stage process. The primary node removes any candidates that 
> cannot init replica recovery (essentially alive and knows about the block) to 
> create a sync list.  Stage 2 issues updates to the sync list – _but fails if 
> any node fails_ unlike the first stage.  The NN should be informed of nodes 
> that did succeed.
> Manual recovery will also fail until the problematic node is temporarily 
> stopped so a connection refused will induce the bad node to be pruned from 
> the candidates.  Recovery succeeds, the lease is released, under replication 
> is fixed, and block is invalidated from the bad node.






[jira] [Commented] (HDFS-12818) Support multiple storages in DataNodeCluster / SimulatedFSDataset

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293090#comment-16293090
 ] 

genericqa commented on HDFS-12818:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
41s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 46s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 393 unchanged - 
1 fixed = 394 total (was 394) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 52 unchanged - 8 fixed = 52 total (was 60) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  8s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}163m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12818 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902410/HDFS-12818.009.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2414f31c3e27 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 09d996f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22421/artifac

[jira] [Commented] (HDFS-12904) Add DataTransferThrottler to the Datanode transfers

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293097#comment-16293097
 ] 

genericqa commented on HDFS-12904:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
13s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 479 unchanged - 0 fixed = 481 total (was 479) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}176m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12904 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902404/HDFS-12904.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux aa3fb1eeac8b 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 09d996f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22420/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job

[jira] [Commented] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-12-15 Thread Misha Dmitriev (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293109#comment-16293109
 ] 

Misha Dmitriev commented on HDFS-12051:
---

The test failures above (some with OOM) look rather strange. I doubt that they 
are related to my change.

I've fixed the one checkstyle problem that I introduced. The other two are 
about long lines, but in the file in question (DFSConfigKeys.java) all lines 
are long, so this is irrelevant.

The findbugs warning is about code that I didn't write; it just happens to be 
in one of the files I touched.

I am now submitting one more patch with some comments fixed and improved.

> Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, 
> HDFS-12051.03.patch, HDFS-12051.04.patch
>
>
> When snapshot diff operation is performed in a NameNode that manages several 
> million HDFS files/directories, NN needs a lot of memory. Analyzing one heap 
> dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
> result in 6.5% memory overhead, and most of these arrays are referenced by 
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
>  and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>  Ovhd Num objs  Num unique objs   Class name
> 3,220,272K (6.5%)   104749528  25760871 byte[]
> 
>   1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
> of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
> 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 
> of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 
> 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name 
> <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode 
> <--  {j.u.ArrayList} <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs 
> <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 
> elements) ... <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
>   409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
> ... and 13479257 more arrays, of which 13260231 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- 
> org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.B
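
The summary above (cut off in this digest) motivates the change: collapsing 
duplicate name byte[] instances behind a canonicalizing map. A minimal, 
self-contained sketch of that interning idea follows; the class is hypothetical 
and is not the code in the attached patches.

{code}
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.concurrent.ConcurrentHashMap;

public class ByteArrayInternerSketch {

  // Key wraps the array so equals()/hashCode() compare contents, not identity.
  private static final class Key {
    private final byte[] bytes;
    Key(byte[] bytes) { this.bytes = bytes; }
    @Override public boolean equals(Object o) {
      return o instanceof Key && Arrays.equals(bytes, ((Key) o).bytes);
    }
    @Override public int hashCode() { return Arrays.hashCode(bytes); }
  }

  private final ConcurrentHashMap<Key, byte[]> pool = new ConcurrentHashMap<>();

  /** Returns a canonical array with the same contents as the argument. */
  public byte[] intern(byte[] name) {
    if (name == null) {
      return null;
    }
    byte[] existing = pool.putIfAbsent(new Key(name), name);
    return existing != null ? existing : name;
  }

  public static void main(String[] args) {
    ByteArrayInternerSketch interner = new ByteArrayInternerSketch();
    byte[] a = "part-m-000".getBytes(StandardCharsets.UTF_8);
    byte[] b = "part-m-000".getBytes(StandardCharsets.UTF_8);
    // Two distinct arrays with identical contents collapse to one instance.
    System.out.println(interner.intern(a) == interner.intern(b)); // true
  }
}
{code}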

[jira] [Updated] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-12-15 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HDFS-12051:
--
Status: In Progress  (was: Patch Available)

> Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, 
> HDFS-12051.03.patch, HDFS-12051.04.patch, HDFS-12051.05.patch
>
>
> When snapshot diff operation is performed in a NameNode that manages several 
> million HDFS files/directories, NN needs a lot of memory. Analyzing one heap 
> dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
> result in 6.5% memory overhead, and most of these arrays are referenced by 
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
>  and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>  Ovhd Num objs  Num unique objs   Class name
> 3,220,272K (6.5%)   104749528  25760871 byte[]
> 
>   1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
> of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
> 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 
> of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 
> 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name 
> <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode 
> <--  {j.u.ArrayList} <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs 
> <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 
> elements) ... <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
>   409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
> ... and 13479257 more arrays, of which 13260231 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- 
> org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: o

[jira] [Updated] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-12-15 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HDFS-12051:
--
Status: Patch Available  (was: In Progress)

> Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, 
> HDFS-12051.03.patch, HDFS-12051.04.patch, HDFS-12051.05.patch
>
>
> When snapshot diff operation is performed in a NameNode that manages several 
> million HDFS files/directories, NN needs a lot of memory. Analyzing one heap 
> dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
> result in 6.5% memory overhead, and most of these arrays are referenced by 
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
>  and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>  Ovhd Num objs  Num unique objs   Class name
> 3,220,272K (6.5%)   104749528  25760871 byte[]
> 
>   1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
> of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
> 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 
> of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 
> 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name 
> <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode 
> <--  {j.u.ArrayList} <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs 
> <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 
> elements) ... <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
>   409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
> ... and 13479257 more arrays, of which 13260231 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- 
> org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: o

[jira] [Updated] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-12-15 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HDFS-12051:
--
Attachment: HDFS-12051.05.patch

> Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, 
> HDFS-12051.03.patch, HDFS-12051.04.patch, HDFS-12051.05.patch
>
>
> When snapshot diff operation is performed in a NameNode that manages several 
> million HDFS files/directories, NN needs a lot of memory. Analyzing one heap 
> dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
> result in 6.5% memory overhead, and most of these arrays are referenced by 
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
>  and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>  Ovhd Num objs  Num unique objs   Class name
> 3,220,272K (6.5%)   104749528  25760871 byte[]
> 
>   1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
> of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
> 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 
> of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 
> 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name 
> <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode 
> <--  {j.u.ArrayList} <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs 
> <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 
> elements) ... <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
>   409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
> ... and 13479257 more arrays, of which 13260231 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- 
> org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.ha

[jira] [Commented] (HDFS-10348) Namenode report bad block method doesn't check whether the block belongs to datanode before adding it to corrupt replicas map.

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293193#comment-16293193
 ] 

genericqa commented on HDFS-10348:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-10348 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-10348 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12801965/HDFS-10348-1.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22425/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Namenode report bad block method doesn't check whether the block belongs to 
> datanode before adding it to corrupt replicas map.
> --
>
> Key: HDFS-10348
> URL: https://issues.apache.org/jira/browse/HDFS-10348
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-10348-1.patch, HDFS-10348.patch
>
>
> Namenode (via the report bad block method) doesn't check whether the block 
> belongs to the datanode before it adds it to the corrupt replicas map.
> In one of our clusters we found that there were 3 lingering corrupt blocks.
> It happened in the following order.
> 1. Two clients called getBlockLocations for a particular file.
> 2. Client C1 tried to open the file, encountered a checksum error from 
> node N3, and reported the bad block (blk1) to the namenode.
> 3. Namenode added node N3 and block blk1 to the corrupt replicas map and 
> asked one of the good nodes (one of the other 2 nodes) to replicate the block 
> to another node N4.
> 4. After receiving the block, N4 sent an IBR (with RECEIVED_BLOCK) to the 
> namenode.
> 5. Namenode removed the block and node N3 from the corrupt replicas map.
>    It also removed N3's storage from the triplets and queued an invalidate 
> request for N3.
> 6. In the meantime, Client C2 tried to open the file and the request went to 
> node N3.
>    C2 also encountered the checksum exception and reported the bad block to 
> the namenode.
> 7. Namenode added the corrupt block blk1 and node N3 to the corrupt replicas 
> map without confirming whether node N3 has the block or not.
> After deleting the block, N3 sent an IBR (with DELETED) and the namenode 
> simply ignored the report, since N3's storage was no longer in the 
> triplets (from step 5).
> We took the node out of rotation, but the block was still present only in the 
> corruptReplicasMap. 
> On removing a node, we only go through the blocks that are present 
> in the triplets for that datanode.
> [~kshukla]'s patch fixed this bug via 
> https://issues.apache.org/jira/browse/HDFS-9958.
> But I think the following check should be made in 
> BlockManager#markBlockAsCorrupt instead of 
> BlockManager#findAndMarkBlockAsCorrupt.
> {noformat}
> if (storage == null) {
>   storage = storedBlock.findStorageInfo(node);
> }
> if (storage == null) {
>   blockLog.debug("BLOCK* findAndMarkBlockAsCorrupt: {} not found on {}",
>   blk, dn);
>   return;
> }
> {noformat}






[jira] [Commented] (HDFS-10614) Appended blocks can be closed even before IBRs from DataNodes

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293195#comment-16293195
 ] 

genericqa commented on HDFS-10614:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-10614 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-10614 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865892/HDFS-10614.03.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22427/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Appended blocks can be closed even before IBRs from DataNodes
> -
>
> Key: HDFS-10614
> URL: https://issues.apache.org/jira/browse/HDFS-10614
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-10614.01.patch, HDFS-10614.02.patch, 
> HDFS-10614.03.patch
>
>
> Scenario:
>1. Open the file for append()
>2. Trigger append pipeline setup by adding some data.
>3. Consider that the RECEIVING IBRs from the DNs reach the NN first.
>4. The updatePipeline() RPC is sent to the namenode to update the pipeline.
>5. Now, if complete() is called on the file even before the pipeline is 
> closed, the block will be COMPLETE even before it is actually 
> FINALIZED on the DN side, and the file will be closed.






[jira] [Commented] (HDFS-10477) Stop decommission a rack of DataNodes caused NameNode fail over to standby

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293196#comment-16293196
 ] 

genericqa commented on HDFS-10477:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-10477 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-10477 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817992/HDFS-10477.005.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22428/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Stop decommission a rack of DataNodes caused NameNode fail over to standby
> --
>
> Key: HDFS-10477
> URL: https://issues.apache.org/jira/browse/HDFS-10477
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
> Attachments: HDFS-10477.002.patch, HDFS-10477.003.patch, 
> HDFS-10477.004.patch, HDFS-10477.005.patch, HDFS-10477.patch
>
>
> In our cluster, when we stopped decommissioning a rack which has 46 DataNodes, 
> it locked the Namesystem for about 7 minutes, as the log below shows:
> {code}
> 2016-05-26 20:11:41,697 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.27:1004
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 285258 over-replicated blocks on 10.142.27.27:1004 during recommissioning
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.118:1004
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 279923 over-replicated blocks on 10.142.27.118:1004 during recommissioning
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.113:1004
> 2016-05-26 20:12:09,007 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 294307 over-replicated blocks on 10.142.27.113:1004 during recommissioning
> 2016-05-26 20:12:09,008 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.117:1004
> 2016-05-26 20:12:18,055 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 314381 over-replicated blocks on 10.142.27.117:1004 during recommissioning
> 2016-05-26 20:12:18,056 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.130:1004
> 2016-05-26 20:12:25,938 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 272779 over-replicated blocks on 10.142.27.130:1004 during recommissioning
> 2016-05-26 20:12:25,939 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.121:1004
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 287248 over-replicated blocks on 10.142.27.121:1004 during recommissioning
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.33:1004
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 299868 over-replicated blocks on 10.142.27.33:1004 during recommissioning
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.137:1004
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 303914 over-replicated blocks on 10.142.27.137:1004 during recommissioning
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.51:1004
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 281175 over-replicated blocks on 10.142.27.51:1004 during recommissioning
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.12:1004
> 2016-05-26 20:13:08,756 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 274880 over-replicat

[jira] [Commented] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-15 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293222#comment-16293222
 ] 

Jason Lowe commented on HDFS-12881:
---

Thanks for the branch-2 patch!  +1 lgtm.  I agree the unit test failures 
appear to be unrelated, and I verified those tests pass locally with the patch 
applied.

Committing this.



> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HDFS-12881
> URL: https://issues.apache.org/jira/browse/HDFS-12881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HDFS-12881-branch-2.10.0.001.patch, 
> HDFS-12881.001.patch, HDFS-12881.002.patch, HDFS-12881.003.patch, 
> HDFS-12881.004.patch
>
>
> There are a few places in HDFS code that are closing an output stream with 
> IOUtils.cleanupWithLogger like this:
> {code}
>   try {
> ...write to outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> This suppresses any IOException that occurs during the close() method which 
> could lead to partial/corrupted output without throwing a corresponding 
> exception.  The code should either use try-with-resources or explicitly close 
> the stream within the try block so the exception thrown during close() is 
> properly propagated as exceptions during write operations are.
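
For illustration, a minimal sketch of the try-with-resources form recommended above (the class name, stream type and path handling are placeholders, not the affected HDFS code):

{code}
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

class WriteExample {
  // With try-with-resources, a failure in close() propagates to the caller
  // (or is attached as a suppressed exception if the write itself already
  // failed) instead of being swallowed by a cleanup helper.
  static void write(byte[] data, String path) throws IOException {
    try (OutputStream out = Files.newOutputStream(Paths.get(path))) {
      out.write(data);
    }
  }
}
{code}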






[jira] [Updated] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-15 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HDFS-12881:
--
   Resolution: Fixed
Fix Version/s: 2.7.6
   2.8.4
   2.9.1
   2.10.0
   Status: Resolved  (was: Patch Available)

Thanks, Ajay!  I committed this to branch-2, branch-2.9, branch-2.8, and 
branch-2.7 as well.


> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HDFS-12881
> URL: https://issues.apache.org/jira/browse/HDFS-12881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4, 2.7.6
>
> Attachments: HDFS-12881-branch-2.10.0.001.patch, 
> HDFS-12881.001.patch, HDFS-12881.002.patch, HDFS-12881.003.patch, 
> HDFS-12881.004.patch
>
>
> There are a few places in HDFS code that are closing an output stream with 
> IOUtils.cleanupWithLogger like this:
> {code}
>   try {
> ...write to outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> This suppresses any IOException that occurs during the close() method which 
> could lead to partial/corrupted output without throwing a corresponding 
> exception.  The code should either use try-with-resources or explicitly close 
> the stream within the try block so the exception thrown during close() is 
> properly propagated as exceptions during write operations are.






[jira] [Updated] (HDFS-12925) Ozone: Container : Add key versioning support-2

2017-12-15 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12925:
--
Attachment: HDFS-12925-HDFS-7240.003.patch

The v003 patch fixes the checkstyle and javadoc issues; the findbugs warnings are not 
introduced by this patch. The failed tests all passed locally, except for the 
consistently failing test {{TestOzoneRpcClient}}.

> Ozone: Container : Add key versioning support-2
> ---
>
> Key: HDFS-12925
> URL: https://issues.apache.org/jira/browse/HDFS-12925
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12925-HDFS-7240.001.patch, 
> HDFS-12925-HDFS-7240.002.patch, HDFS-12925-HDFS-7240.003.patch
>
>
> One component for versioning is assembling read IO vector, (please see 4.2 
> section of the [versioning design 
> doc|https://issues.apache.org/jira/secure/attachment/12877154/OzoneVersion.001.pdf]
>  under HDFS-12000 for the detail). This JIRA adds the util functions that 
> takes a list with blocks from different versions and properly generate the 
> read vector for the requested version.
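
To make the idea concrete, here is a small hypothetical sketch of read-vector assembly (plain Java, not the Ozone API; the block record and the selection rule are assumptions for illustration): for each offset, keep the newest block whose version does not exceed the requested one.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical block record: offset within the key, length, and the version that wrote it.
class VersionedBlock {
  final long offset, length, version;
  VersionedBlock(long offset, long length, long version) {
    this.offset = offset; this.length = length; this.version = version;
  }
}

class ReadVectorSketch {
  // Returns the blocks visible at requestedVersion, ordered by offset.
  static List<VersionedBlock> readVector(List<VersionedBlock> blocks, long requestedVersion) {
    Map<Long, VersionedBlock> newestPerOffset = new TreeMap<>();
    for (VersionedBlock b : blocks) {
      if (b.version > requestedVersion) {
        continue;  // written after the requested version, so not visible to this read
      }
      VersionedBlock current = newestPerOffset.get(b.offset);
      if (current == null || b.version > current.version) {
        newestPerOffset.put(b.offset, b);
      }
    }
    return new ArrayList<>(newestPerOffset.values());
  }
}
{code}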






[jira] [Comment Edited] (HDFS-12925) Ozone: Container : Add key versioning support-2

2017-12-15 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293273#comment-16293273
 ] 

Chen Liang edited comment on HDFS-12925 at 12/15/17 9:31 PM:
-

The v003 patch fixes the checkstyle and javadoc issues; the findbugs warnings are not 
introduced by this patch. The failed tests all passed locally, except for 
{{TestOzoneRpcClient}}, which fails consistently even without the patch.


was (Author: vagarychen):
The v003 patch fixes the checkstyle and javadoc issues; the findbugs warnings are not 
introduced by this patch. The failed tests all passed locally, except for the 
consistently failing test {{TestOzoneRpcClient}}.

> Ozone: Container : Add key versioning support-2
> ---
>
> Key: HDFS-12925
> URL: https://issues.apache.org/jira/browse/HDFS-12925
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12925-HDFS-7240.001.patch, 
> HDFS-12925-HDFS-7240.002.patch, HDFS-12925-HDFS-7240.003.patch
>
>
> One component for versioning is assembling read IO vector, (please see 4.2 
> section of the [versioning design 
> doc|https://issues.apache.org/jira/secure/attachment/12877154/OzoneVersion.001.pdf]
>  under HDFS-12000 for the detail). This JIRA adds the util functions that 
> takes a list with blocks from different versions and properly generate the 
> read vector for the requested version.






[jira] [Assigned] (HDFS-12917) Fix description errors in testErasureCodingConf.xml

2017-12-15 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang reassigned HDFS-12917:
-

Assignee: chencan

> Fix description errors in testErasureCodingConf.xml
> ---
>
> Key: HDFS-12917
> URL: https://issues.apache.org/jira/browse/HDFS-12917
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-12917.002.patch, HADOOP-12917.patch
>
>
> In testErasureCodingConf.xml, there are two cases whose description should 
> probably be "getPolicy : get EC policy information at specified path, which have 
> an EC Policy".






[jira] [Commented] (HDFS-12917) Fix description errors in testErasureCodingConf.xml

2017-12-15 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293301#comment-16293301
 ] 

Chen Liang commented on HDFS-12917:
---

Thanks [~candychencan] for the updated patch! +1 on the v002 patch. I've committed 
it to trunk (and I've changed the assignee of this JIRA to you). Thanks for your 
contribution!

> Fix description errors in testErasureCodingConf.xml
> ---
>
> Key: HDFS-12917
> URL: https://issues.apache.org/jira/browse/HDFS-12917
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-12917.002.patch, HADOOP-12917.patch
>
>
> In testErasureCodingConf.xml, there are two cases whose description should 
> probably be "getPolicy : get EC policy information at specified path, which have 
> an EC Policy".






[jira] [Updated] (HDFS-12917) Fix description errors in testErasureCodingConf.xml

2017-12-15 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12917:
--
  Resolution: Fixed
Target Version/s: 3.1.0
  Status: Resolved  (was: Patch Available)

> Fix description errors in testErasureCodingConf.xml
> ---
>
> Key: HDFS-12917
> URL: https://issues.apache.org/jira/browse/HDFS-12917
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-12917.002.patch, HADOOP-12917.patch
>
>
> In testErasureCodingConf.xml, there are two cases whose description should 
> probably be "getPolicy : get EC policy information at specified path, which have 
> an EC Policy".






[jira] [Commented] (HDFS-12917) Fix description errors in testErasureCodingConf.xml

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293332#comment-16293332
 ] 

Hudson commented on HDFS-12917:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13389 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13389/])
HDFS-12917. Fix description errors in testErasureCodingConf.xml. (cliang: rev 
aa503a29d0bba4725a10623a96f9220c9389117c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testErasureCodingConf.xml


> Fix description errors in testErasureCodingConf.xml
> ---
>
> Key: HDFS-12917
> URL: https://issues.apache.org/jira/browse/HDFS-12917
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-12917.002.patch, HADOOP-12917.patch
>
>
> In testErasureCodingConf.xml, there are two cases whose description should 
> probably be "getPolicy : get EC policy information at specified path, which have 
> an EC Policy".






[jira] [Commented] (HDFS-12641) Backport HDFS-11755 into branch-2.7 to fix a regression in HDFS-11445

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293370#comment-16293370
 ] 

genericqa commented on HDFS-12641:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.7 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
32s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
43s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} branch-2.7 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 652 unchanged - 1 fixed = 654 total (was 653) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 60 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m 
17s{color} | {color:red} The patch generated 435 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:19 |
| Failed junit tests | hadoop.hdfs.TestWriteRead |
|   | hadoop.hdfs.TestClientReportBadBlock |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDataTransferKeepalive |
|   | hadoop.hdfs.TestFileLengthOnClusterRestart |
|   | hadoop.hdfs.TestBlockMissingException |
|   | hadoop.hdfs.TestHDFSTrash |
|   | hadoop.hdfs.TestDFSShellGenericOptions |
|   | hadoop.hdfs.web.TestWebHdfsTokens |
|   | hadoop.hdfs.TestFSInputChecker |
|   | hadoop.hdfs.TestFileCreationClient |
|   | hadoop.hdfs.TestSnapshotCommands |
| Timed out junit tests | org.apache.hadoop.hdfs.TestHdfsAdmin |
|   | org.apache.hadoop.hdfs.TestSetrepDecreasing |
|   | org.apache.hadoop.hdfs.TestQuota |
|   | org.apache.hadoop.hdfs.TestFileAppend4 |
|   | org.apache.hadoop.hdfs.TestReadWhileWriting |
|   | org.apache.hadoop.hdfs.TestLease |
|   | org.apache.hadoop.hdfs.TestHDFSServerPorts |
|   | org.apache.hadoop.hdfs.TestDFSUpgrade |
|   | org.apache.hadoop.hdfs.web.TestWebHDFS |
|   | org.apache.hadoop.hdfs.TestAppendSnapshotTruncate |
|   | org.apache.hadoop.hdfs.TestRollingUpgradeRollback |
|   | org.apache.hadoop.hdfs.TestMiniDFSCluster |
|   | org.apache.hadoop.hdfs.TestBlockReaderFactory |
|   | org.apache.hadoop.hdfs.TestHFlush |
|   | org.apache.hadoop.hdfs.TestEncryptedTransfer |
|   | org.apache.hadoop.hdfs.TestDFSShell |
|   | org.apache.hadoop.hdfs.TestDataTransferProtocol |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.

[jira] [Commented] (HDFS-12925) Ozone: Container : Add key versioning support-2

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293378#comment-16293378
 ] 

genericqa commented on HDFS-12925:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
12s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
12s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
26s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 19m  
7s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-hdfs-client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
40s{color} | {color:red} hadoop-hdfs in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
46s{color} | {color:red} hadoop-hdfs-client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-hdfs in HDFS-7240 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 34s{color} 
| {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 72 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
5s{color} | {color:red} The patch 1200 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  6m  
5s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
52s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m  
0s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
40s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
18s{color} | {color:red} The patch generated 7 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} |

[jira] [Commented] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293395#comment-16293395
 ] 

genericqa commented on HDFS-12051:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
59s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 1216 unchanged - 19 fixed = 1218 total (was 1235) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
6s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 1 
unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Increment of volatile field 
org.apache.hadoop.hdfs.server.namenode.NameCache.size in 
org.apache.hadoop.hdfs.server.namenode.NameCache.put(byte[])  At 
NameCache.java:in org.apache.hadoop.hdfs.server.namenode.NameCache.put(byte[])  
At NameCache.java:[line 117] |
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12051 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902438/HDFS-12051.05.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux af5d38c733e4 3.13.0-135-generic #1

[jira] [Created] (HDFS-12932) Confusing LOG message for block replication

2017-12-15 Thread Chao Sun (JIRA)
Chao Sun created HDFS-12932:
---

 Summary: Confusing LOG message for block replication
 Key: HDFS-12932
 URL: https://issues.apache.org/jira/browse/HDFS-12932
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Affects Versions: 2.8.3
Reporter: Chao Sun
Assignee: Chao Sun
Priority: Minor


In our cluster we see a large number of log messages such as the following:
{code}
2017-12-15 22:55:54,603 INFO 
org.apache.hadoop.hdfs.server.namenode.FSDirectory: Increasing replication from 
3 to 3 for 
{code}

This is a little confusing since "from 3 to 3" is not "increasing". Digging 
into it, it seems related to this piece of code:
{code}
if (oldBR != -1) {
  if (oldBR > targetReplication) {
FSDirectory.LOG.info("Decreasing replication from {} to {} for {}",
 oldBR, targetReplication, iip.getPath());
  } else {
FSDirectory.LOG.info("Increasing replication from {} to {} for {}",
 oldBR, targetReplication, iip.getPath());
  }
}
{code}
Perhaps a {{oldBR == targetReplication}} case is missing?
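
A plausible shape for the fix, sketched from the snippet above (an illustration, not the final patch), is to handle the equal case separately so the message no longer claims an increase:

{code}
if (oldBR != -1) {
  if (oldBR > targetReplication) {
    FSDirectory.LOG.info("Decreasing replication from {} to {} for {}",
                         oldBR, targetReplication, iip.getPath());
  } else if (oldBR < targetReplication) {
    FSDirectory.LOG.info("Increasing replication from {} to {} for {}",
                         oldBR, targetReplication, iip.getPath());
  } else {
    // Replication factor is unchanged; avoid the misleading "increasing" wording.
    FSDirectory.LOG.info("Replication remains {} for {}",
                         targetReplication, iip.getPath());
  }
}
{code}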







[jira] [Commented] (HDFS-12920) HDFS default value change (with adding time unit) breaks old version MR tarball work with new version (3.0) of hadoop

2017-12-15 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293424#comment-16293424
 ] 

Arpit Agarwal commented on HDFS-12920:
--

[~djp], does the presence of any unit-suffixed values in the config file cause this 
failure?

> HDFS default value change (with adding time unit) breaks old version MR 
> tarball work with new version (3.0) of hadoop
> -
>
> Key: HDFS-12920
> URL: https://issues.apache.org/jira/browse/HDFS-12920
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Junping Du
>Priority: Blocker
>
> After HADOOP-15059 got resolved, I tried to deploy the 2.9.0 tarball with 3.0.0 
> RC1 and ran a job, which failed with the following errors:
> {noformat}
> 2017-12-12 13:29:06,824 INFO [main] 
> org.apache.hadoop.service.AbstractService: Service 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; cause: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:542)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1764)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:308)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1722)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1719)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1650)
> {noformat}
> This is because of HDFS-10845: we added time units to hdfs-default.xml, but 
> they cannot be recognized by old-version MR jars. 
> This breaks our rolling upgrade story, so it should be marked as a blocker.
> A quick workaround is to add values in hdfs-site.xml with all time units 
> removed. But the right way may be to revert HDFS-10845 (and get rid of the 
> noisy warnings).
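
To illustrate the failure mode (the key below is only an example, not necessarily one of the defaults involved): newer code reads unit-suffixed values with the time-aware accessor, while older jars read the same key with getLong() and fail on Long.parseLong("30s").

{code}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

class TimeUnitSuffixExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("dfs.example.interval", "30s");  // example key with a 3.0-style suffixed value

    // Unit-aware accessor parses the suffix and returns 30.
    long ok = conf.getTimeDuration("dfs.example.interval", 30, TimeUnit.SECONDS);
    System.out.println(ok);

    // Older code paths use getLong(), which ends in
    // java.lang.NumberFormatException: For input string: "30s".
    long broken = conf.getLong("dfs.example.interval", 30);
    System.out.println(broken);
  }
}
{code}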






[jira] [Commented] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-12-15 Thread Misha Dmitriev (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293428#comment-16293428
 ] 

Misha Dmitriev commented on HDFS-12051:
---

There are some test failures again. They seem unrelated; some of them, or 
related ones, also failed in the previous Hadoop Jenkins build.

> Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, 
> HDFS-12051.03.patch, HDFS-12051.04.patch, HDFS-12051.05.patch
>
>
> When a snapshot diff operation is performed in a NameNode that manages several 
> million HDFS files/directories, the NN needs a lot of memory. Analyzing one heap 
> dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
> result in 6.5% memory overhead, and most of these arrays are referenced by 
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
>  and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>  Ovhd Num objs  Num unique objs   Class name
> 3,220,272K (6.5%)   104749528  25760871 byte[]
> 
>   1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
> of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
> 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 
> of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 
> 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name 
> <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode 
> <--  {j.u.ArrayList} <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs 
> <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 
> elements) ... <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
>   409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
> ... and 13479257 more arrays, of which 13260231 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- 
> org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.ser
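
As an aside, a minimal sketch of the interning idea described above (a hypothetical helper, not the actual NameCache code): duplicate byte[] names are mapped to one canonical instance so the heap stores each distinct name once.

{code}
import java.util.Arrays;
import java.util.concurrent.ConcurrentHashMap;

final class ByteArrayInterner {
  private final ConcurrentHashMap<Key, byte[]> cache = new ConcurrentHashMap<>();

  // Returns a canonical array equal to name; the first caller's array becomes canonical.
  byte[] intern(byte[] name) {
    byte[] existing = cache.putIfAbsent(new Key(name), name);
    return existing != null ? existing : name;
  }

  // Wrapper giving byte[] value-based equals/hashCode so it can key a hash map.
  private static final class Key {
    private final byte[] bytes;
    Key(byte[] bytes) { this.bytes = bytes; }
    @Override public boolean equals(Object o) {
      return o instanceof Key && Arrays.equals(bytes, ((Key) o).bytes);
    }
    @Override public int hashCode() { return Arrays.hashCode(bytes); }
  }
}
{code}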

[jira] [Commented] (HDFS-3745) fsck prints that it's using KSSL even when it's in fact using SPNEGO for authentication

2017-12-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293509#comment-16293509
 ] 

genericqa commented on HDFS-3745:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
58s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 6s{color} | {color:green} root: The patch generated 0 new + 361 unchanged - 1 
fixed = 361 total (was 362) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 58s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 53s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 14s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 58s{color} 
| {color:red} hadoop-mapreduce-client-hs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}266m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem |
|   | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.server.common.TestJspHelper |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:y

[jira] [Updated] (HDFS-12925) Ozone: Container : Add key versioning support-2

2017-12-15 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12925:
--
Attachment: HDFS-12925-HDFS-7240.004.patch

The latest Jenkins build failed again; resubmitting the v003 patch as v004 to trigger 
another run.

> Ozone: Container : Add key versioning support-2
> ---
>
> Key: HDFS-12925
> URL: https://issues.apache.org/jira/browse/HDFS-12925
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12925-HDFS-7240.001.patch, 
> HDFS-12925-HDFS-7240.002.patch, HDFS-12925-HDFS-7240.003.patch, 
> HDFS-12925-HDFS-7240.004.patch
>
>
> One component for versioning is assembling read IO vector, (please see 4.2 
> section of the [versioning design 
> doc|https://issues.apache.org/jira/secure/attachment/12877154/OzoneVersion.001.pdf]
>  under HDFS-12000 for the detail). This JIRA adds the util functions that 
> takes a list with blocks from different versions and properly generate the 
> read vector for the requested version.






[jira] [Commented] (HDFS-12904) Add DataTransferThrottler to the Datanode transfers

2017-12-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293524#comment-16293524
 ] 

Íñigo Goiri commented on HDFS-12904:


Actually, [~lukmajercak] and [~thinktaocs] went through the code and there is 
another {{sendBlock()}}:
{{BPOfferService#processCommandFromActive()}} -> {{DataNode#transferBlocks()}} 
-> {{DataNode#transferBlock()}} -> {{DataTransfer#run()}} finally calls 
{{BlockSender#sendBlock()}} without a throttler.
This will start a {{DataXceiver}} on the other side, which will be throttled, but 
we should also throttle the one that sends.
I don't see a proper way to distinguish those.
In any case, we may want to throttle the one in {{DataTransfer#run()}}.
Thoughts?
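
For context, a minimal usage sketch of the existing {{DataTransferThrottler}} (the wrapper class and the 10 MB/s figure are illustrative; it only shows how a shared throttler meters bytes on a send path):

{code}
import java.io.IOException;
import java.io.OutputStream;
import org.apache.hadoop.hdfs.util.DataTransferThrottler;

class ThrottledSender {
  // Shared across transfers so the bandwidth budget applies to the whole send path.
  private final DataTransferThrottler throttler =
      new DataTransferThrottler(10L * 1024 * 1024);  // ~10 MB/s, arbitrary figure

  void send(byte[] buf, OutputStream out) throws IOException {
    out.write(buf);
    // Sleeps once more than the configured bytes/sec have been pushed.
    throttler.throttle(buf.length);
  }
}
{code}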


> Add DataTransferThrottler to the Datanode transfers
> ---
>
> Key: HDFS-12904
> URL: https://issues.apache.org/jira/browse/HDFS-12904
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-12904.000.patch, HDFS-12904.001.patch
>
>
> The {{DataXceiverServer}} already uses throttling for the balancing. The 
> Datanode should also allow throttling the regular data transfers.






[jira] [Updated] (HDFS-12904) Add DataTransferThrottler to the Datanode transfers

2017-12-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12904:
---
Attachment: HDFS-12904.002.patch

> Add DataTransferThrottler to the Datanode transfers
> ---
>
> Key: HDFS-12904
> URL: https://issues.apache.org/jira/browse/HDFS-12904
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-12904.000.patch, HDFS-12904.001.patch, 
> HDFS-12904.002.patch
>
>
> The {{DataXceiverServer}} already uses throttling for the balancing. The 
> Datanode should also allow throttling the regular data transfers.






[jira] [Reopened] (HDFS-12903) [READ] Fix closing streams in ImageWriter

2017-12-15 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas reopened HDFS-12903:
--

> [READ] Fix closing streams in ImageWriter
> -
>
> Key: HDFS-12903
> URL: https://issues.apache.org/jira/browse/HDFS-12903
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12903-HDFS-9806.001.patch
>
>
> HDFS-12894 showed a FindBug in HDFS-9806. This seems related to HDFS-12881 
> when using {{IOUtils.cleanupWithLogger()}}.






[jira] [Commented] (HDFS-12903) [READ] Fix closing streams in ImageWriter

2017-12-15 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293542#comment-16293542
 ] 

Chris Douglas commented on HDFS-12903:
--

This reappears in spotbugs 3.1.1. It's spurious, as 
{{IOUtils::cleanupWithLogger}} will safely close the stream. Let's just 
suppress it.
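
One way to do that, sketched below (the bug pattern id and method are placeholders; an entry in the project's findbugs exclude file would work just as well):

{code}
import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;

class SuppressionSketch {
  // Placeholder pattern id; the real one comes from the spotbugs report.
  @SuppressFBWarnings(value = "OS_OPEN_STREAM",
      justification = "stream is closed by IOUtils.cleanupWithLogger")
  void writeImage() {
    // body that hands its stream to IOUtils.cleanupWithLogger(...)
  }
}
{code}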

> [READ] Fix closing streams in ImageWriter
> -
>
> Key: HDFS-12903
> URL: https://issues.apache.org/jira/browse/HDFS-12903
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12903-HDFS-9806.001.patch
>
>
> HDFS-12894 showed a FindBug in HDFS-9806. This seems related to HDFS-12881 
> when using {{IOUtils.cleanupWithLogger()}}.






[jira] [Resolved] (HDFS-12929) There is error message when hdfs dfsadmin is run against a ViewFS config

2017-12-15 Thread KaiXinXIaoLei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KaiXinXIaoLei resolved HDFS-12929.
--
Resolution: Duplicate

https://issues.apache.org/jira/browse/HDFS-12292

> There is  error message when hdfs dfsadmin is run against a ViewFS config
> -
>
> Key: HDFS-12929
> URL: https://issues.apache.org/jira/browse/HDFS-12929
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: KaiXinXIaoLei
>
> In viewfs config, i run "hdfs dfsadmin -safemode get", there is error:
> {noformat}
> safemode: FileSystem viewfs://XX/ is not an HDFS file system
> {noformat}






[jira] [Updated] (HDFS-12903) [READ] Fix closing streams in ImageWriter

2017-12-15 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12903:
-
Attachment: HDFS-12903-HDFS-9806.002.patch

> [READ] Fix closing streams in ImageWriter
> -
>
> Key: HDFS-12903
> URL: https://issues.apache.org/jira/browse/HDFS-12903
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12903-HDFS-9806.001.patch, 
> HDFS-12903-HDFS-9806.002.patch
>
>
> HDFS-12894 showed a FindBug in HDFS-9806. This seems related to HDFS-12881 
> when using {{IOUtils.cleanupWithLogger()}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12903) [READ] Fix closing streams in ImageWriter

2017-12-15 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293547#comment-16293547
 ] 

Chris Douglas commented on HDFS-12903:
--

Checked locally, this suppresses the warning correctly. Reverted the old patch 
and pushed this.

Thanks, [~virajith]

> [READ] Fix closing streams in ImageWriter
> -
>
> Key: HDFS-12903
> URL: https://issues.apache.org/jira/browse/HDFS-12903
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12903-HDFS-9806.001.patch, 
> HDFS-12903-HDFS-9806.002.patch
>
>
> HDFS-12894 showed a FindBug in HDFS-9806. This seems related to HDFS-12881 
> when using {{IOUtils.cleanupWithLogger()}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12903) [READ] Fix closing streams in ImageWriter

2017-12-15 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293550#comment-16293550
 ] 

Chris Douglas commented on HDFS-12903:
--

Checked locally, this suppresses the warning correctly. Reverted the old patch 
and pushed this.

Thanks, [~virajith]

> [READ] Fix closing streams in ImageWriter
> -
>
> Key: HDFS-12903
> URL: https://issues.apache.org/jira/browse/HDFS-12903
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12903-HDFS-9806.001.patch, 
> HDFS-12903-HDFS-9806.002.patch
>
>
> HDFS-12894 showed a FindBug in HDFS-9806. This seems related to HDFS-12881 
> when using {{IOUtils.cleanupWithLogger()}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-12903) [READ] Fix closing streams in ImageWriter

2017-12-15 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12903:
-
Comment: was deleted

(was: Checked locally, this suppresses the warning correctly. Reverted the old 
patch and pushed this.

Thanks, [~virajith])

> [READ] Fix closing streams in ImageWriter
> -
>
> Key: HDFS-12903
> URL: https://issues.apache.org/jira/browse/HDFS-12903
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12903-HDFS-9806.001.patch, 
> HDFS-12903-HDFS-9806.002.patch
>
>
> HDFS-12894 showed a FindBug in HDFS-9806. This seems related to HDFS-12881 
> when using {{IOUtils.cleanupWithLogger()}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12903) [READ] Fix closing streams in ImageWriter

2017-12-15 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas resolved HDFS-12903.
--
Resolution: Fixed

> [READ] Fix closing streams in ImageWriter
> -
>
> Key: HDFS-12903
> URL: https://issues.apache.org/jira/browse/HDFS-12903
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12903-HDFS-9806.001.patch, 
> HDFS-12903-HDFS-9806.002.patch
>
>
> HDFS-12894 showed a FindBug in HDFS-9806. This seems related to HDFS-12881 
> when using {{IOUtils.cleanupWithLogger()}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12920) HDFS default value change (with adding time unit) breaks old version MR tarball work with new version (3.0) of hadoop

2017-12-15 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293555#comment-16293555
 ] 

Junping Du commented on HDFS-12920:
---

bq. An alternative to reverting the change is to deprecate the old property and 
create a new one that understands time units, as was raised in that JIRA. If 
specifying units breaks rolling upgrades, then what is the point of adding 
units, ever?
That is also a possible approach. We can either keep the default values for the 
existing properties, or start using new properties and deprecate the previous 
ones.

bq. So another workaround is to have at least two tarballs on HDFS, one that 
uses 3.x and one that uses 2.x. The 3.x site configs request the 3.x tarball 
and the 2.x site configs request the 2.x tarball. When the job submitter client 
upgrades to use 3.x jars, it can also upgrade to 3.x configs to start running 
the job with 3.x as well.
As we discussed offline, if we explicitly package these configs into the tarball, 
we may not hit this issue, since each version's tarball and configuration will 
match in the end. However, some users may not have followed this practice before, 
and may not after. Also, managing configurations in different places (cluster 
setup, MR tarball, job submission, etc.) is complicated. Maybe it is easier to 
fix the issue here instead of in the tarball configuration?

bq. Junping Du, does presence of any unit-suffixed values in the config file 
cause this failure?
Hi [~arpitagarwal], the unit-suffixed values are now the defaults (in 
hdfs-default.xml) in 3.x. A job submitted against an old-version MR tarball will 
load the new default values provided by the new Hadoop deployment, and will get 
stuck with the exception I posted above.
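
To make the failure mode concrete, here is a minimal sketch (the property name is 
hypothetical, used only for illustration): 3.x code reads the value with 
{{Configuration#getTimeDuration}}, which understands the "30s" suffix, while 
2.x-era code reads it with {{Configuration#getLong}}, which does not.

{code}
// Hedged sketch of why a unit-suffixed default breaks old jars.
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class TimeUnitExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("dfs.example.interval", "30s"); // hypothetical key, 3.x-style value

    // New-style read, as 3.x code does: prints 30
    System.out.println(
        conf.getTimeDuration("dfs.example.interval", 60, TimeUnit.SECONDS));

    // Old-style read, as a 2.x MR tarball would do: NumberFormatException for "30s"
    try {
      conf.getLong("dfs.example.interval", 60);
    } catch (NumberFormatException e) {
      System.out.println("getLong failed: " + e.getMessage());
    }
  }
}
{code}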

> HDFS default value change (with adding time unit) breaks old version MR 
> tarball work with new version (3.0) of hadoop
> -
>
> Key: HDFS-12920
> URL: https://issues.apache.org/jira/browse/HDFS-12920
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Junping Du
>Priority: Blocker
>
> After HADOOP-15059 was resolved, I tried to deploy a 2.9.0 tarball with 3.0.0
> RC1 and ran a job, which failed with the following errors:
> {noformat}
> 2017-12-12 13:29:06,824 INFO [main] 
> org.apache.hadoop.service.AbstractService: Service 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; cause: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:542)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1764)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:308)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1722)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1719)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1650)
> {noformat}
> This is because of HDFS-10845: we added time units to hdfs-default.xml, but
> they cannot be recognized by old-version MR jars.
> This breaks our rolling upgrade story, so it should be marked as a blocker.
> A quick workaround is to add the values in hdfs-site.xml with all time units
> removed. But the right way may be to revert HDFS-10845 (and get rid of the
> noisy warnings).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12292) Federation: Support viewfs:// schema path for DfsAdmin commands

2017-12-15 Thread KaiXinXIaoLei (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293558#comment-16293558
 ] 

KaiXinXIaoLei commented on HDFS-12292:
--

I also met this problem when running "hdfs dfsadmin -safemode get". Is this 
patch useful?

> Federation: Support viewfs:// schema path for DfsAdmin commands
> ---
>
> Key: HDFS-12292
> URL: https://issues.apache.org/jira/browse/HDFS-12292
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Mikhail Erofeev
>Assignee: Mikhail Erofeev
> Attachments: HDFS-12292-002.patch, HDFS-12292-003.patch, 
> HDFS-12292-004.patch, HDFS-12292.patch
>
>
> Motivation:
> As of now, clients need to specify a nameservice when a cluster is federated;
> otherwise, an exception is thrown:
> {code}
> hdfs dfsadmin -setQuota 10 viewfs://vfs-root/user/uname
> setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system
> # with fs.defaultFS = viewfs://vfs-root/
> hdfs dfsadmin -setQuota 10 vfs-root/user/uname
> setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system
> # works fine thanks to https://issues.apache.org/jira/browse/HDFS-11432
> hdfs dfsadmin -setQuota 10 hdfs://users-fs/user/uname
> {code}
> This is inconvenient: it makes it impossible to rely on fs.defaultFS and
> forces the creation of client-side mappings for management scripts.
> Implementation:
> PathData that is passed to commands should be resolved to its actual 
> FileSystem
> Result:
> ViewFS will be resolved to the actual HDFS file system
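
A minimal, hedged sketch of the resolution idea above (not the actual patch; the 
class and method names are illustrative): resolve the viewfs:// path through the 
mount table before handing it to an HDFS-only admin command.

{code}
// Hedged sketch: resolvePath() follows the viewfs mount table to the target
// filesystem, which can then be checked for being HDFS.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class ResolveViewFs {
  public static DistributedFileSystem resolveToHdfs(Path viewFsPath, Configuration conf)
      throws IOException {
    FileSystem viewFs = viewFsPath.getFileSystem(conf);
    Path resolved = viewFs.resolvePath(viewFsPath);   // path on the mount target
    FileSystem target = resolved.getFileSystem(conf);
    if (!(target instanceof DistributedFileSystem)) {
      throw new IllegalArgumentException(
          "FileSystem " + target.getUri() + " is not an HDFS file system");
    }
    return (DistributedFileSystem) target;
  }
}
{code}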



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-15 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-9806:

Release Note: Provided storage allows data stored outside HDFS to be mapped 
to and addressed from HDFS. It builds on heterogeneous storage by introducing a 
new storage type, PROVIDED, to the set of media in a datanode. Clients 
accessing data in PROVIDED storages can cache replicas in local media, enforce 
HDFS invariants (e.g., security, quotas), and address more data than the 
cluster could persist in the storage attached to DataNodes.

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch, HDFS-9806.002.patch, HDFS-9806.003.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate/and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-15 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293569#comment-16293569
 ] 

Chris Douglas commented on HDFS-9806:
-

The merge vote [passed|https://s.apache.org/tqLt]

Merged to trunk. Thanks [~virajith], [~ehiggs], and [~Thomas Demoor]!

Thanks also to [~elgoiri], [~mackrorysd], [~ste...@apache.org], [~eddyxu], 
[~anu], [~drankye], and [~umamaheswararao] for help with the design, testing, 
and review of this feature.

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Fix For: 3.1.0
>
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch, HDFS-9806.002.patch, HDFS-9806.003.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate/and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-15 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-9806:

      Resolution: Fixed
    Hadoop Flags: Reviewed
   Fix Version/s: 3.1.0
Target Version/s: 3.1.0
          Status: Resolved  (was: Patch Available)

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Fix For: 3.1.0
>
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch, HDFS-9806.002.patch, HDFS-9806.003.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate/and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11190) [READ] Namenode support for data stored in external stores.

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293579#comment-16293579
 ] 

Hudson commented on HDFS-11190:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-11190. [READ] Namenode support for data stored in external stores. 
(cdouglas: rev d65df0f27395792c6e25f5e03b6ba1765e2ba925)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/LocatedBlockBuilder.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockFormatProvider.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockProvider.java
* (add) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java


> [READ] Namenode support for data stored in external stores.
> ---
>
> Key: HDFS-11190
> URL: https://issues.apache.org/jira/browse/HDFS-11190
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11190-HDFS-9806.001.patch, 
> HDFS-11190-HDFS-9806.002.patch, HDFS-11190-HDFS-9806.003.patch, 
> HDFS-11190-HDFS-9806.004.patch
>
>
> The goal of this JIRA is to enable the Namenode to know about blocks that are 
> in {{PROVIDED}} stores and are not necessarily stored on any Datanodes. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10675) [READ] Datanode support to read from external stores.

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293577#comment-16293577
 ] 

Hudson commented on HDFS-10675:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-10675. Datanode support to read from external stores. (cdouglas: rev 
b668eb91556b8c85c2b4925808ccb1f769031c20)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegion.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/DefaultProvidedVolumeDF.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSRollback.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeDF.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestClusterId.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/BlockFormat.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImplBuilder.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionFormat.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestCount.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStartupVersions.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegionProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/BlockAlias.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/UpgradeUtilities.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgrade.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/StorageInfo.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageCompression.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/or

[jira] [Commented] (HDFS-12093) [READ] Share remoteFS between ProvidedReplica instances.

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293587#comment-16293587
 ] 

Hudson commented on HDFS-12093:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12093. [READ] Share remoteFS between ProvidedReplica instances. (cdouglas: 
rev 2407c9b93aabb021b76c802b19c928fb6cbb0a85)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestProvidedReplicaImpl.java


> [READ] Share remoteFS between ProvidedReplica instances.
> 
>
> Key: HDFS-12093
> URL: https://issues.apache.org/jira/browse/HDFS-12093
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12093-HDFS-9806.001.patch, 
> HDFS-12093-HDFS-9806.002.patch
>
>
> When a Datanode comes online using Provided storage, it fills the 
> {{ReplicaMap}} with the known replicas. With Provided Storage, this includes 
> {{ProvidedReplica}} instances. Each of these objects, in their constructor, 
> will construct an FileSystem using the Service Provider. This can result in 
> contacting the remote file system and checking that the credentials are 
> correct and that the data is there. For large systems this is a prohibitively 
> expensive operation to perform per replica.
> Instead, the {{ProvidedVolumeImpl}} should own the reference to the 
> {{remoteFS}} and should share it with the {{ProvidedReplica}} objects on 
> their creation.
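
A minimal, hedged sketch of the proposed sharing (class names are illustrative, 
not the actual patch): the volume creates the remote FileSystem once and hands 
the same reference to every replica it builds.

{code}
// Hedged sketch: one FileSystem per volume, shared by all of its replicas.
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

class SharedRemoteFsVolume {
  private final FileSystem remoteFS; // created once per volume

  SharedRemoteFsVolume(URI baseURI, Configuration conf) throws IOException {
    this.remoteFS = FileSystem.get(baseURI, conf);
  }

  SharedRemoteFsReplica newReplica(URI blockURI) {
    return new SharedRemoteFsReplica(blockURI, remoteFS); // reuse, don't reopen
  }
}

class SharedRemoteFsReplica {
  private final URI blockURI;
  private final FileSystem remoteFS;

  SharedRemoteFsReplica(URI blockURI, FileSystem remoteFS) {
    this.blockURI = blockURI;
    this.remoteFS = remoteFS;
  }
}
{code}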



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11703) [READ] Tests for ProvidedStorageMap

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293582#comment-16293582
 ] 

Hudson commented on HDFS-11703:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-11703. [READ] Tests for ProvidedStorageMap (cdouglas: rev 
89b9faf5294c93f66ba7bbe08f5ab9083ecb5d72)
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java


> [READ] Tests for ProvidedStorageMap
> ---
>
> Key: HDFS-11703
> URL: https://issues.apache.org/jira/browse/HDFS-11703
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11703-HDFS-9806.001.patch, 
> HDFS-11703-HDFS-9806.002.patch
>
>
> Add tests for the {{ProvidedStorageMap}} in the namenode



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12091) [READ] Check that the replicas served from a {{ProvidedVolumeImpl}} belong to the correct external storage

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293586#comment-16293586
 ] 

Hudson commented on HDFS-12091:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12091. [READ] Check that the replicas served from a (cdouglas: rev 
663b3c08b131ea2db693e1a5d2f5da98242fa854)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
HDFS-12289. [READ] HDFS-12091 breaks the tests for provided block reads 
(cdouglas: rev aca023b72cdb325ca66d196443218f6107efa1ca)
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java


> [READ] Check that the replicas served from a {{ProvidedVolumeImpl}} belong to 
> the correct external storage
> --
>
> Key: HDFS-12091
> URL: https://issues.apache.org/jira/browse/HDFS-12091
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12091-HDFS-9806.001.patch, 
> HDFS-12091-HDFS-9806.002.patch
>
>
> A {{ProvidedVolumeImpl}} can only serve blocks that "belong" to it; i.e., for
> blocks served from a {{ProvidedVolumeImpl}}, the {{baseURI}} of the
> {{ProvidedVolumeImpl}} should be a prefix of the blocks' URIs.
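
A minimal, hedged sketch of the containment check described above (illustrative 
only; the URIs are made up and the volume baseURI is assumed to end with a 
trailing slash).

{code}
// Hedged sketch: a block "belongs" to the volume iff baseURI is a prefix of
// its URI; URI.relativize() returns the argument unchanged when it is not.
import java.net.URI;

public class ProvidedVolumeCheck {
  static boolean belongsToVolume(URI baseURI, URI blockURI) {
    return !baseURI.relativize(blockURI).equals(blockURI);
  }

  public static void main(String[] args) {
    URI base = URI.create("s3a://bucket/data/"); // hypothetical volume baseURI
    System.out.println(belongsToVolume(base, URI.create("s3a://bucket/data/b1"))); // true
    System.out.println(belongsToVolume(base, URI.create("s3a://other/b2")));       // false
  }
}
{code}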



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10706) [READ] Add tool generating FSImage from external store

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293578#comment-16293578
 ] 

Hudson commented on HDFS-10706:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-10706. [READ] Add tool generating FSImage from external store (cdouglas: 
rev 8da3a6e314609f9124bd9979cd09cddbc2a10d36)
* (add) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSingleUGIResolver.java
* (add) hadoop-tools/hadoop-fs2img/pom.xml
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreeWalk.java
* (add) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFixedBlockResolver.java
* (add) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreePath.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FixedBlockResolver.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsUGIResolver.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/UGIResolver.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/package-info.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
* (edit) hadoop-tools/pom.xml
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/BlockResolver.java
* (add) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestRandomTreeWalk.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSTreeWalk.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockFormat.java
* (add) hadoop-tools/hadoop-fs2img/src/test/resources/log4j.properties
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/SingleUGIResolver.java
* (add) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FixedBlockMultiReplicaResolver.java
* (edit) hadoop-tools/hadoop-tools-dist/pom.xml


> [READ] Add tool generating FSImage from external store
> --
>
> Key: HDFS-10706
> URL: https://issues.apache.org/jira/browse/HDFS-10706
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, tools
>Reporter: Chris Douglas
>Assignee: Chris Douglas
> Attachments: HDFS-10706-HDFS-9806.002.patch, 
> HDFS-10706-HDFS-9806.003.patch, HDFS-10706-HDFS-9806.004.patch, 
> HDFS-10706-HDFS-9806.005.patch, HDFS-10706-HDFS-9806.006.patch, 
> HDFS-10706.001.patch, HDFS-10706.002.patch
>
>
> To experiment with provided storage, this provides a tool to map an external 
> namespace to an FSImage/NN storage. By loading it in a NN, one can access the 
> remote FS using HDFS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11653) [READ] ProvidedReplica should return an InputStream that is bounded by its length

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293580#comment-16293580
 ] 

Hudson commented on HDFS-11653:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-11653. [READ] ProvidedReplica should return an InputStream that is 
(cdouglas: rev 1108cb76917debf0a8541d5130e015883eb521af)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestProvidedReplicaImpl.java


> [READ] ProvidedReplica should return an InputStream that is bounded by its 
> length
> -
>
> Key: HDFS-11653
> URL: https://issues.apache.org/jira/browse/HDFS-11653
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11653-HDFS-9806.001.patch, 
> HDFS-11653-HDFS-9806.002.patch
>
>
> {{ProvidedReplica#getDataInputStream}} should return an InputStream that is 
> bounded by {{ProvidedReplica#getBlockDataLength()}}
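
A minimal, hedged sketch of the bounding described above (not the actual 
ProvidedReplica code): wrap the raw stream so a reader cannot run past the 
replica's block data length, here using {{org.apache.hadoop.util.LimitInputStream}}.

{code}
// Hedged sketch: LimitInputStream caps the readable bytes at the given length.
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.util.LimitInputStream;

public class BoundedReplicaStream {
  static InputStream boundedStream(InputStream raw, long blockDataLength) {
    return new LimitInputStream(raw, blockDataLength);
  }

  public static void main(String[] args) throws IOException {
    InputStream bounded = boundedStream(new ByteArrayInputStream(new byte[100]), 10);
    System.out.println(bounded.read(new byte[100])); // prints 10, not 100
  }
}
{code}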



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12584) [READ] Fix errors in image generation tool from latest rebase

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293590#comment-16293590
 ] 

Hudson commented on HDFS-12584:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12584. [READ] Fix errors in image generation tool from latest (cdouglas: 
rev 17052c4aff104cb02701bc1e8dc9cd73d1a325fb)
* (edit) hadoop-tools/hadoop-fs2img/pom.xml
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java


> [READ] Fix errors in image generation tool from latest rebase
> -
>
> Key: HDFS-12584
> URL: https://issues.apache.org/jira/browse/HDFS-12584
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12584-HDFS-9806.001.patch
>
>
> Fix compile errors, from the latest rebase, in FSImage generation tool



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11791) [READ] Test for increasing replication of provided files.

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293583#comment-16293583
 ] 

Hudson commented on HDFS-11791:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-11791. [READ] Test for increasing replication of provided files. 
(cdouglas: rev 4851f06bc2df9d2cfc69fc7c4cecf7babcaa7728)
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java


> [READ] Test for increasing replication of provided files.
> -
>
> Key: HDFS-11791
> URL: https://issues.apache.org/jira/browse/HDFS-11791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11791-HDFS-9806.001.patch, 
> HDFS-11791-HDFS-9806.002.patch
>
>
> Test whether increasing the replication of a file with storage policy 
> {{PROVIDED}} replicates blocks locally (i.e., to {{DISK}}).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11792) [READ] Test cases for ProvidedVolumeDF and ProviderBlockIteratorImpl

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293584#comment-16293584
 ] 

Hudson commented on HDFS-11792:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-11792. [READ] Test cases for ProvidedVolumeDF and (cdouglas: rev 
55ade54b8ed36e18f028f478381a96e7b8c6be50)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java


> [READ] Test cases for ProvidedVolumeDF and ProviderBlockIteratorImpl
> 
>
> Key: HDFS-11792
> URL: https://issues.apache.org/jira/browse/HDFS-11792
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11792-HDFS-9806.001.patch
>
>
> Test cases for {{ProvidedVolumeDF}} and {{ProviderBlockIteratorImpl}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11673) [READ] Handle failures of Datanode with PROVIDED storage

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293585#comment-16293585
 ] 

Hudson commented on HDFS-11673:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-11673. [READ] Handle failures of Datanode with PROVIDED storage (cdouglas: 
rev 546b95f4843f3cbbbdf72d90d202cad551696082)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java


> [READ] Handle failures of Datanode with PROVIDED storage
> 
>
> Key: HDFS-11673
> URL: https://issues.apache.org/jira/browse/HDFS-11673
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11673-HDFS-9806.001.patch, 
> HDFS-11673-HDFS-9806.002.patch, HDFS-11673-HDFS-9806.003.patch, 
> HDFS-11673-HDFS-9806.004.patch, HDFS-11673-HDFS-9806.005.patch
>
>
> Blocks on {{PROVIDED}} storage should become unavailable if and only if all 
> Datanodes that are configured with {{PROVIDED}} storage become unavailable. 
> Even if one Datanode with {{PROVIDED}} storage is available, all blocks on 
> the {{PROVIDED}} storage should be accessible.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11663) [READ] Fix NullPointerException in ProvidedBlocksBuilder

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293581#comment-16293581
 ] 

Hudson commented on HDFS-11663:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-11663. [READ] Fix NullPointerException in ProvidedBlocksBuilder (cdouglas: 
rev aa5ec85f7fd2dc6ac568a88716109bab8df8be19)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java


> [READ] Fix NullPointerException in ProvidedBlocksBuilder
> 
>
> Key: HDFS-11663
> URL: https://issues.apache.org/jira/browse/HDFS-11663
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11663-HDFS-9806.001.patch, 
> HDFS-11663-HDFS-9806.002.patch, HDFS-11663-HDFS-9806.003.patch
>
>
> When there are no Datanodes with PROVIDED storage, 
> {{ProvidedBlocksBuilder#build}} leads to a {{NullPointerException}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12289) [READ] HDFS-12091 breaks the tests for provided block reads

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293588#comment-16293588
 ] 

Hudson commented on HDFS-12289:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12289. [READ] HDFS-12091 breaks the tests for provided block reads 
(cdouglas: rev aca023b72cdb325ca66d196443218f6107efa1ca)
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java


> [READ] HDFS-12091 breaks the tests for provided block reads
> ---
>
> Key: HDFS-12289
> URL: https://issues.apache.org/jira/browse/HDFS-12289
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12289-HDFS-9806.001.patch
>
>
> In the tests within {{TestNameNodeProvidedImplementation}}, the files that 
> are supposed to belong to a provided volume are not located under the Storage 
> directory assigned to the volume in {{MiniDFSCluster}}. With HDFS-12091, this 
> isn't correct and thus, it breaks the tests. This JIRA is to fix the tests 
> under {{TestNameNodeProvidedImplementation}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12777) [READ] Reduce memory and CPU footprint for PROVIDED volumes.

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293598#comment-16293598
 ] 

Hudson commented on HDFS-12777:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12777. [READ] Reduce memory and CPU footprint for PROVIDED volumes. 
(cdouglas: rev e1a28f95b8ffcb86300148f10a23b710f8388341)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java


> [READ] Reduce memory and CPU footprint for PROVIDED volumes.
> 
>
> Key: HDFS-12777
> URL: https://issues.apache.org/jira/browse/HDFS-12777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12777-HDFS-9806.001.patch, 
> HDFS-12777-HDFS-9806.002.patch, HDFS-12777-HDFS-9806.003.patch, 
> HDFS-12777-HDFS-9806.004.patch
>
>
> As opposed to local blocks, each DN keeps track of all blocks in PROVIDED 
> storage. This can be millions of blocks for 100s of TBs of PROVIDED data. 
> Storing the data for these blocks can lead to a large memory footprint. 
> Further, with so many blocks, {{DirectoryScanner}} running on a PROVIDED 
> volume can increase the memory and CPU utilization. 
> To reduce these overheads, this JIRA aims to (a) disable the 
> {{DirectoryScanner}} on PROVIDED volumes (as HDFS-9806 focuses on only 
> read-only data in PROVIDED volumes), (b) reduce the space occupied by 
> {{FinalizedProvidedReplicaInfo}} by using a common URI prefix across all 
> PROVIDED blocks.
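
A minimal, hedged sketch of idea (b) above (the class name is illustrative and 
the base URI is assumed to end with a trailing slash): keep only the per-replica 
path suffix and rebuild the full URI from the volume's shared prefix on demand.

{code}
// Hedged sketch: store a short suffix per replica instead of a full URI.
import java.net.URI;

public class CompactProvidedReplica {
  private final URI volumeBaseURI;  // shared across all replicas of the volume
  private final String pathSuffix;  // the only per-replica state

  public CompactProvidedReplica(URI volumeBaseURI, URI blockURI) {
    this.volumeBaseURI = volumeBaseURI;
    this.pathSuffix = volumeBaseURI.relativize(blockURI).toString();
  }

  public URI getBlockURI() {
    return volumeBaseURI.resolve(pathSuffix); // rebuilt on demand
  }
}
{code}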



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12671) [READ] Test NameNode restarts when PROVIDED is configured

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293594#comment-16293594
 ] 

Hudson commented on HDFS-12671:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12671. [READ] Test NameNode restarts when PROVIDED is configured 
(cdouglas: rev c293cc8e9b032d2c573340725ef8ecc15d49430d)
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java


> [READ] Test NameNode restarts when PROVIDED is configured
> -
>
> Key: HDFS-12671
> URL: https://issues.apache.org/jira/browse/HDFS-12671
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12671-HDFS-9806.001.patch, 
> HDFS-12671-HDFS-9806.002.patch, HDFS-12671-HDFS-9806.003.patch, 
> HDFS-12671-HDFS-9806.004.patch
>
>
> Add test case to ensure namenode restarts can be handled with provided 
> storage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12607) [READ] Even one dead datanode with PROVIDED storage results in ProvidedStorageInfo being marked as FAILED

2017-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293593#comment-16293593
 ] 

Hudson commented on HDFS-12607:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13391/])
HDFS-12607. [READ] Even one dead datanode with PROVIDED storage results 
(cdouglas: rev 71d0a825711387fe06396323a9ca6a5af0ade415)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java


> [READ] Even one dead datanode with PROVIDED storage results in 
> ProvidedStorageInfo being marked as FAILED
> -
>
> Key: HDFS-12607
> URL: https://issues.apache.org/jira/browse/HDFS-12607
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12607-HDFS-9806.001.patch, 
> HDFS-12607-HDFS-9806.002.patch, HDFS-12607-HDFS-9806.003.patch, 
> HDFS-12607.repro.patch
>
>
> When a DN configured with PROVIDED storage is marked as dead by the NN, the 
> state of {{providedStorageInfo}} in {{ProvidedStorageMap}} is set to FAILED, 
> and never becomes NORMAL. The state should change to FAILED only if all 
> datanodes with PROVIDED storage are dead, and should be restored back to 
> NORMAL when a Datanode with NORMAL DatanodeStorage reports in.
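
A minimal, hedged sketch of the intended state transition (illustrative only, not 
the actual ProvidedStorageMap code): the shared PROVIDED storage goes FAILED only 
when the last PROVIDED datanode is dead, and returns to NORMAL as soon as one 
reports in.

{code}
// Hedged sketch of the "all dead => FAILED, any alive => NORMAL" rule.
import java.util.HashSet;
import java.util.Set;

public class ProvidedStorageState {
  enum State { NORMAL, FAILED }

  private final Set<String> liveProvidedDatanodes = new HashSet<>();
  private State state = State.NORMAL;

  synchronized void datanodeReported(String dnUuid) {
    liveProvidedDatanodes.add(dnUuid);
    state = State.NORMAL;              // any live PROVIDED datanode => NORMAL
  }

  synchronized void datanodeDead(String dnUuid) {
    liveProvidedDatanodes.remove(dnUuid);
    if (liveProvidedDatanodes.isEmpty()) {
      state = State.FAILED;            // only when none are left
    }
  }

  synchronized State getState() {
    return state;
  }
}
{code}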



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


