[ https://issues.apache.org/jira/browse/HDFS-14851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16933784#comment-16933784 ]

CR Hota commented on HDFS-14851:
--------------------------------

[Íñigo Goiri|https://issues.apache.org/jira/secure/ViewProfile.jspa?name=elgoiri] Thanks for tagging me.

[Danny Becker|http://jira/secure/ViewProfile.jspa?name=dannytbecker] Thanks for working on this.

Yes, we do use WebHdfs, but we haven't come across a scenario like this yet. The 
change looks quite expensive, performance-wise, since it runs on every call just 
to fix the response code. Iterating through all blocks to find which ones are 
corrupt looks expensive, especially given that the limit on blocks per file is 
1048576. We may instead want to expose an API on the InputStream that returns 
the list of all corrupted blocks (just as it exposes getAllBlocks); if that list 
is non-empty, this web call can throw BlockMissingException.
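A rough sketch of what that could look like. This is a self-contained illustration, not the real DFSInputStream API: the Block class, the SketchInputStream wrapper, and getAllCorruptedBlocks() are hypothetical stand-ins, and a plain RuntimeException stands in for BlockMissingException.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a block descriptor; real code would use LocatedBlock.
class Block {
    final long id;
    final boolean corrupt;
    Block(long id, boolean corrupt) { this.id = id; this.corrupt = corrupt; }
}

// Hypothetical wrapper mirroring the proposed API: alongside getAllBlocks(),
// expose only the corrupted blocks so the WebHdfs handler need not iterate
// over every block itself.
class SketchInputStream {
    private final List<Block> blocks;
    SketchInputStream(List<Block> blocks) { this.blocks = blocks; }

    List<Block> getAllBlocks() { return blocks; }

    // Proposed addition: the list of corrupted blocks for this file.
    List<Block> getAllCorruptedBlocks() {
        List<Block> corrupt = new ArrayList<>();
        for (Block b : blocks) {
            if (b.corrupt) corrupt.add(b);
        }
        return corrupt;
    }
}

public class WebHdfsOpenSketch {
    // The open handler checks the list size and fails fast instead of
    // returning 200; RuntimeException stands in for BlockMissingException.
    static void openOrThrow(SketchInputStream in) {
        if (!in.getAllCorruptedBlocks().isEmpty()) {
            throw new RuntimeException("BlockMissingException: file has missing or corrupt blocks");
        }
    }

    public static void main(String[] args) {
        List<Block> blocks = new ArrayList<>();
        blocks.add(new Block(1L, false));
        blocks.add(new Block(2L, true));
        SketchInputStream in = new SketchInputStream(blocks);
        System.out.println(in.getAllCorruptedBlocks().size());
        try {
            openOrThrow(in);
            System.out.println("open succeeded");
        } catch (RuntimeException e) {
            System.out.println("open failed: " + e.getMessage());
        }
    }
}
```

The point of the sketch is that the cost of tracking corrupt blocks moves into the stream, where the information is already known, rather than being recomputed on every open.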

Cc [~xkrogen] [~jojochuang]

> WebHdfs Returns 200 Status Code for Open of Files with Corrupt Blocks
> ---------------------------------------------------------------------
>
>                 Key: HDFS-14851
>                 URL: https://issues.apache.org/jira/browse/HDFS-14851
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: webhdfs
>            Reporter: Danny Becker
>            Assignee: Danny Becker
>            Priority: Minor
>         Attachments: HDFS-14851.001.patch
>
>
> WebHdfs returns 200 status code for Open operations on files with missing or 
> corrupt blocks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
