[ https://issues.apache.org/jira/browse/HDFS-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15345061#comment-15345061 ]

James Clampffer commented on HDFS-10543:
----------------------------------------

New patch looks good other than two minor issues.

The extra warning "JDK v1.8.0_91 generated 1 new + 29 unchanged - 0 fixed = 30 
total (was 29)" introduced by this patch is a signed vs. unsigned comparison.  
Those have started to break things (HDFS-10554); could you take a look at it?  
It's good to know the compiler is picking up on this in some places.  The rest 
of the warnings come from our third-party libraries, and I'm hoping to clean 
those up soon.
{code}
while (*nbyte != 0 && offset < file_info_->file_length_) {
{code}
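For reference, and assuming offset here is a signed type (e.g. off_t) while 
file_info_->file_length_ is unsigned (e.g. uint64_t) -- the exact types in the 
patch may differ -- one way to keep the comparison well-defined and silence the 
warning is to rule out negative offsets and cast explicitly, something like:
{code}
// Sketch only; assumes offset is a signed off_t and file_length_ an unsigned
// uint64_t.  Comparing them directly converts offset to unsigned, so a negative
// offset would compare as a huge value; the >= 0 check plus explicit cast keeps
// the comparison safe and quiets the sign-compare warning.
while (*nbyte != 0 && offset >= 0 &&
       static_cast<uint64_t>(offset) < file_info_->file_length_) {
{code}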

I hate to complain about code formatting, but could you make sure you use two 
spaces for indentation rather than tabs?  No rationale for this other than 
keeping consistent with the rest of the library.
{code}
   if(!stat.ok()) {
-    return stat;
+      return stat;
   }
{code}


> hdfsRead read stops at block boundary
> -------------------------------------
>
>                 Key: HDFS-10543
>                 URL: https://issues.apache.org/jira/browse/HDFS-10543
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs-client
>            Reporter: Xiaowei Zhu
>            Assignee: James Clampffer
>         Attachments: HDFS-10543.HDFS-8707.000.patch, 
> HDFS-10543.HDFS-8707.001.patch, HDFS-10543.HDFS-8707.002.patch, 
> HDFS-10543.HDFS-8707.003.patch
>
>
> Reproducer:
> {code}
> char *buf2 = new char[file_info->mSize];
> memset(buf2, 0, (size_t)file_info->mSize);
> int ret = hdfsRead(fs, file, buf2, file_info->mSize);
> delete [] buf2;
> if(ret != file_info->mSize) {
>   std::stringstream ss;
>   ss << "tried to read " << file_info->mSize << " bytes. but read " << ret << " bytes";
>   ReportError(ss.str());
>   hdfsCloseFile(fs, file);
>   continue;
> }
> {code}
> When it runs with a file ~1.4 GB large, it returns an error like "tried to 
> read 1468888890 bytes. but read 134217728 bytes". The HDFS cluster it runs 
> against has a block size of 134217728 bytes, so it seems hdfsRead stops at a 
> block boundary. This looks like a regression. We should add a retry to 
> continue reading across blocks for files with multiple blocks.
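
A minimal sketch of the retry described above, assuming the standard libhdfs 
hdfsRead signature (returns the number of bytes read, 0 at EOF, -1 on error) 
and that mSize fits in tSize:
{code}
// Caller-side sketch of the retry loop, not the actual patch: keep calling
// hdfsRead until the requested number of bytes has been read or EOF/error.
tSize total = 0;
while (total < file_info->mSize) {
  tSize n = hdfsRead(fs, file, buf2 + total, file_info->mSize - total);
  if (n <= 0) {
    break;  // 0 = EOF, -1 = error
  }
  total += n;
}
{code}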


