[ https://issues.apache.org/jira/browse/HDFS-12146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16089031#comment-16089031 ]

Surendra Singh Lilhore commented on HDFS-12146:
-----------------------------------------------

{{hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier}} failed 
because of an {{out of space}} issue:

{noformat}
2017-07-15 10:55:00,808 [DataXceiver for client /127.0.0.1:47222 [Replacing 
block BP-1486486435-172.17.0.2-1500116097405:blk_1073741827_1003 from 
c7dfc551-78d1-4f31-8eb2-db9c223be27d]] ERROR datanode.DataNode 
(DataXceiver.java:run(323)) - 127.0.0.1:57555:DataXceiver error processing 
REPLACE_BLOCK operation  src: /127.0.0.1:47222 dst: /127.0.0.1:57555
org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: Out of space: The 
volume with the most available space (=0 B) is less than the block size (=1024 
B).
{noformat}

> [SPS] : Fix 
> TestStoragePolicySatisfierWithStripedFile#testSPSWhenFileHasLowRedundancyBlocks
> -------------------------------------------------------------------------------------------
>
>                 Key: HDFS-12146
>                 URL: https://issues.apache.org/jira/browse/HDFS-12146
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: namenode
>            Reporter: Surendra Singh Lilhore
>            Assignee: Surendra Singh Lilhore
>         Attachments: HDFS-12146-HDFS-10285.001.patch
>
>
> TestStoragePolicySatisfierWithStripedFile#testSPSWhenFileHasLowRedundancyBlocks
>  failed in many builds with a port bind exception. I feel we don't need to restart 
> the datanodes on the same ports; we are only checking the low-redundancy block scenario.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
