[ https://issues.apache.org/jira/browse/HDFS-3100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo (Nicholas), SZE updated HDFS-3100:
-----------------------------------------

    Description: 
STEPS:
1. Deploy a single-node HDFS 0.23.1 cluster and configure HDFS as follows:
A) enable webhdfs
B) enable append
C) disable permissions
2. Start HDFS.
3. Run the attached test script (an equivalent reproduction is sketched below).
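
The attached test.sh is not reproduced here. As an illustration only, a minimal script along the same lines, assuming the documented WebHDFS two-step CREATE/APPEND redirect protocol, the default namenode HTTP address (localhost:50070), and a hypothetical target path, could look like this:

#!/bin/bash
# Hypothetical reproduction sketch; it approximates, but is not, the attached test.sh.
# Assumed hdfs-site.xml settings for step 1 (property names as in 0.23.x):
#   dfs.webhdfs.enabled=true, dfs.support.append=true, dfs.permissions.enabled=false
NN="http://localhost:50070"    # assumed namenode HTTP address
FILE="/tmp/testFile"           # hypothetical target path
dd if=/dev/zero of=/tmp/chunk bs=32k count=1 2>/dev/null   # one 32K block of zeros

# CREATE: the namenode answers with a 307 redirect; the data is then PUT to the
# datanode URL given in the Location header.
LOC=$(curl -s -i -X PUT "$NN/webhdfs/v1$FILE?op=CREATE&overwrite=true" \
      | grep -i '^Location:' | awk '{print $2}' | tr -d '\r')
curl -s -X PUT -T /tmp/chunk "$LOC" > /dev/null

# APPEND the remaining 4999 chunks (32K * 5000 zeros in total), again via the
# two-step redirect; stop at the first failed append.
for i in $(seq 1 4999); do
  LOC=$(curl -s -i -X POST "$NN/webhdfs/v1$FILE?op=APPEND" \
        | grep -i '^Location:' | awk '{print $2}' | tr -d '\r')
  curl -s -f -X POST -T /tmp/chunk "$LOC" || { echo "append $i failed"; exit 1; }
done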

RESULT:
Expected: a file named testFile is created and populated with 32K * 5000 zeros, and HDFS remains healthy.
Actual: the script does not finish; the file is created but not populated as expected, because the append operation fails.

The datanode log shows that the block scanner reports a bad replica and the namenode decides to delete it. Since this is a single-node cluster, the append then fails because no valid replica remains. The script fails every time it is run, which should not happen.
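
Not part of the original report, but one way to confirm that the only replica was invalidated after the block scanner flagged it is to run fsck on the file right after the failure, for example:

# Inspect the block and replica state of the target file after the failed append
# (the path is the hypothetical one from the sketch above).
bin/hdfs fsck /tmp/testFile -files -blocks -locations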

Datanode and Namenode logs are attached.

       Assignee: Brandon Li  (was: Tsz Wo (Nicholas), SZE)
    
> failed to append data using webhdfs
> -----------------------------------
>
>                 Key: HDFS-3100
>                 URL: https://issues.apache.org/jira/browse/HDFS-3100
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.23.1
>            Reporter: Zhanwei.Wang
>            Assignee: Brandon Li
>         Attachments: hadoop-wangzw-datanode-ubuntu.log, 
> hadoop-wangzw-namenode-ubuntu.log, test.sh, testAppend.patch
>
>
> STEPS:
> 1. Deploy a single-node HDFS 0.23.1 cluster and configure HDFS as follows:
> A) enable webhdfs
> B) enable append
> C) disable permissions
> 2. Start HDFS.
> 3. Run the attached test script.
> RESULT:
> Expected: a file named testFile is created and populated with 32K * 5000 zeros, and HDFS remains healthy.
> Actual: the script does not finish; the file is created but not populated as expected, because the append operation fails.
> The datanode log shows that the block scanner reports a bad replica and the namenode decides to delete it. Since this is a single-node cluster, the append then fails because no valid replica remains. The script fails every time it is run, which should not happen.
> Datanode and Namenode logs are attached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

