[ https://issues.apache.org/jira/browse/HDFS-10177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Yongtao Yang updated HDFS-10177:
--------------------------------
    Description: 
in the implementation of {{hdfsRead()}},
{code:title=hdfsRead.c}
tSize hdfsRead(hdfsFS fs, hdfsFile f, void* buffer, tSize length)
{
    ......
    jint noReadBytes = length;
    ......
    jthr = invokeMethod(env, &jVal, INSTANCE, jInputStream, HADOOP_ISTRM,
                        "read", "([B)I", jbRarray);
    if (jthr) {
        destroyLocalReference(env, jbRarray);
        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
            "hdfsRead: FSDataInputStream#read");
        return -1;
    }
    if (jVal.i < 0) {
        // EOF
        destroyLocalReference(env, jbRarray);
        return 0;
    } else if (jVal.i == 0) {
        destroyLocalReference(env, jbRarray);
        errno = EINTR;
        return -1;
    }
    (*env)->GetByteArrayRegion(env, jbRarray, 0, noReadBytes, buffer);
{code}
{{noReadBytes}} is initialized to {{length}}, but it should be set to {{jVal.i}} before the bytes are copied into the native {{buffer}}.


> a wrong number of the bytes read are copied into native buffer in hdfsRead()
> -----------------------------------------------------------------------------
>
>                 Key: HDFS-10177
>                 URL: https://issues.apache.org/jira/browse/HDFS-10177
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: libhdfs
>    Affects Versions: 2.6.0, 2.7.2, 2.6.2, 2.6.3, 2.6.4
>         Environment: RHEL 6.3, 64-bit
>                      Oracle JDK 1.7.0_55
>            Reporter: Yongtao Yang
>
> in the implementation of {{hdfsRead()}},
> {code:title=hdfsRead.c}
> tSize hdfsRead(hdfsFS fs, hdfsFile f, void* buffer, tSize length)
> {
>     ......
>     jint noReadBytes = length;
>     ......
>     jthr = invokeMethod(env, &jVal, INSTANCE, jInputStream, HADOOP_ISTRM,
>                         "read", "([B)I", jbRarray);
>     if (jthr) {
>         destroyLocalReference(env, jbRarray);
>         errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
>             "hdfsRead: FSDataInputStream#read");
>         return -1;
>     }
>     if (jVal.i < 0) {
>         // EOF
>         destroyLocalReference(env, jbRarray);
>         return 0;
>     } else if (jVal.i == 0) {
>         destroyLocalReference(env, jbRarray);
>         errno = EINTR;
>         return -1;
>     }
>     (*env)->GetByteArrayRegion(env, jbRarray, 0, noReadBytes, buffer);
> {code}
> {{noReadBytes}} is initialized to {{length}}, but it should be set to {{jVal.i}} before the bytes are copied into the native {{buffer}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
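A minimal sketch of one possible fix, for illustration only (the committed patch for this issue may differ): after {{FSDataInputStream#read}} returns, assign the number of bytes actually read ({{jVal.i}}) to {{noReadBytes}} before {{GetByteArrayRegion}} copies data into the caller's buffer.
{code:title=sketch of a possible fix (not the committed patch)}
    } else if (jVal.i == 0) {
        destroyLocalReference(env, jbRarray);
        errno = EINTR;
        return -1;
    }
    // jVal.i holds the number of bytes FSDataInputStream#read actually
    // returned, which may be smaller than the requested length.
    noReadBytes = jVal.i;
    // Copy only the bytes that were read into the caller's native buffer,
    // instead of the full requested length.
    (*env)->GetByteArrayRegion(env, jbRarray, 0, noReadBytes, buffer);
{code}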