[ https://issues.apache.org/jira/browse/HDFS-3579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426262#comment-13426262 ]
Andy Isaacson commented on HDFS-3579:
-------------------------------------

{code}
@@ -870,15 +812,12 @@ int hdfsCloseFile(hdfsFS fs, hdfsFile file)
-    //Parameters
-    jobject jStream = (jobject)(file ? file->file : NULL);
...
+    (*env)->DeleteGlobalRef(env, file->file);
     free(file);
-    (*env)->DeleteGlobalRef(env, jStream);
{code}

Let's preserve the interface that {{hdfsFile file}} can be NULL without causing a SEGV. Just toss in an {{if (file == NULL) return -1;}} near the top.

{code}
+    jthr = invokeMethod(env, &jVal, INSTANCE, jInputStream, HADOOP_ISTRM,
+                        "read", "([B)I", jbRarray);
...
+    if (jVal.i < 0) {
+        // EOF
+        destroyLocalReference(env, jbRarray);
+        return 0;
+    } else if (jVal.i == 0) {
+        destroyLocalReference(env, jbRarray);
+        errno = EINTR;
+        return -1;
{code}

Is this correct? FSDataInputStream#read returns -1 on EOF and 0 on EINTR? That's unusual. I see docs for the -1 case, but I don't see anywhere that the 0 could come from.

{code}
tSize hdfsPread(hdfsFS fs, hdfsFile f, tOffset position,
                void* buffer, tSize length)
{
-    // JAVA EQUIVALENT:
-    //  byte [] bR = new byte[length];
-    //  fis.read(pos, bR, 0, length);
{code}

I find these JAVA EQUIVALENT comments very helpful; could we keep them around? So long as they're accurate, I mean. If they're misleading, then deleting is correct.

{code}
+    if (jthr) {
+        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
+            "hdfsTell: org.apache.hadoop.fs.%s::getPos",
+            ((f->type == INPUT) ? "FSDataInputStream" :
+             "FSDataOutputStream"));
{code}

Please use {{interface}} here rather than repeating the ternary.

More review to come ... two-thirds done now ...
> libhdfs: fix exception handling
> -------------------------------
>
>                 Key: HDFS-3579
>                 URL: https://issues.apache.org/jira/browse/HDFS-3579
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: libhdfs
>    Affects Versions: 2.0.1-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-3579.004.patch, HDFS-3579.005.patch, HDFS-3579.006.patch
>
>
> libhdfs does not consistently handle exceptions. Sometimes we don't free the memory associated with them (a memory leak). Sometimes we invoke JNI functions that are not supposed to be invoked when an exception is active.
> Running a libhdfs test program with -Xcheck:jni shows the latter problem clearly:
> {code}
> WARNING in native method: JNI call made with exception pending
> WARNING in native method: JNI call made with exception pending
> WARNING in native method: JNI call made with exception pending
> WARNING in native method: JNI call made with exception pending
> WARNING in native method: JNI call made with exception pending
> Exception in thread "main" java.io.IOException: ...
> {code}