[ https://issues.apache.org/jira/browse/HDFS-3577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13409701#comment-13409701 ]

Daryn Sharp commented on HDFS-3577:
-----------------------------------

bq. The file size was 1MB in the test but the block size was only 1kB. 
Therefore, it created a lot of local files and failed with 
"java.net.SocketException: Too many open files".

Does this mean there's an fd leak?  Or at least a leak during the create 
request?  If so, is the test at fault?
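For context, a minimal sketch of the scenario the quoted test seems to describe: 
writing a 1 MB file with a 1 kB block size produces roughly 1024 blocks (and as 
many local block files). The file system handle and path below are illustrative 
assumptions, not the actual test code.

{code:java}
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ManyBlocksSketch {
  // Write a 1 MB file with a 1 kB block size, mirroring the quoted test
  // parameters; ~1024 blocks means many local block files on the DataNode.
  static void writeManyBlocks(FileSystem fs) throws java.io.IOException {
    final Path p = new Path("/tmp/manyBlocks");  // illustrative path
    final long blockSize = 1024L;                // 1 kB block size
    final byte[] data = new byte[1024 * 1024];   // 1 MB of data
    final FSDataOutputStream out =
        fs.create(p, true, 4096, (short) 1, blockSize);
    try {
      out.write(data);
    } finally {
      out.close();
    }
  }
}
{code}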
                
> webHdfsFileSystem fails to read files with chunked transfer encoding
> --------------------------------------------------------------------
>
>                 Key: HDFS-3577
>                 URL: https://issues.apache.org/jira/browse/HDFS-3577
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 2.0.0-alpha
>            Reporter: Alejandro Abdelnur
>            Assignee: Tsz Wo (Nicholas), SZE
>            Priority: Blocker
>         Attachments: h3577_20120705.patch, h3577_20120708.patch
>
>
> If a file is large enough that the HTTP server running webhdfs/httpfs serves 
> it with chunked transfer encoding (more than 24K in the case of webhdfs), 
> then the WebHdfsFileSystem client fails with an IOException with the 
> message *Content-Length header is missing*.
> It looks like WebHdfsFileSystem delegates opening of the input stream to 
> the *ByteRangeInputStream.URLOpener* class, which checks for the 
> *Content-Length* header; when chunked transfer encoding is used, the 
> *Content-Length* header is not present and the *URLOpener.openInputStream()* 
> method throws an exception.

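For illustration, a minimal sketch (not the actual ByteRangeInputStream.URLOpener 
source) of the kind of Content-Length check described above, together with one 
hedged way to tolerate chunked responses by reporting the length as unknown 
instead of throwing:

{code:java}
import java.io.IOException;
import java.net.HttpURLConnection;

class ContentLengthCheckSketch {
  // Returns the response length, or null when the server used
  // "Transfer-Encoding: chunked" and therefore sent no Content-Length.
  static Long getResponseLength(HttpURLConnection conn) throws IOException {
    final String cl = conn.getHeaderField("Content-Length");
    if (cl != null) {
      return Long.valueOf(cl);
    }
    // A chunked response legitimately omits Content-Length; throwing here is
    // what yields the "Content-Length header is missing" failure above.
    if ("chunked".equalsIgnoreCase(conn.getHeaderField("Transfer-Encoding"))) {
      return null;  // length unknown; the caller reads the stream until EOF
    }
    throw new IOException("Content-Length header is missing");
  }
}
{code}

Whether a null (unknown) length is acceptable downstream depends on how 
ByteRangeInputStream uses the value; the attached patches presumably contain 
the actual fix.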