[jira] [Updated] (HADOOP-10479) Fix new findbugs warnings in hadoop-minikdc
[ https://issues.apache.org/jira/browse/HADOOP-10479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swarnim Kulkarni updated HADOOP-10479:
--------------------------------------
    Status: Patch Available  (was: In Progress)

> Fix new findbugs warnings in hadoop-minikdc
> -------------------------------------------
>
> Key: HADOOP-10479
> URL: https://issues.apache.org/jira/browse/HADOOP-10479
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Haohui Mai
> Assignee: Swarnim Kulkarni
> Labels: newbie
> Attachments: HADOOP-10479.1.patch.txt
>
> The following findbugs warnings need to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-minikdc ---
> [INFO] BugInstance size is 2
> [INFO] Error size is 0
> [INFO] Total bugs: 2
> [INFO] Found reliance on default encoding in org.apache.hadoop.minikdc.MiniKdc.initKDCServer(): new java.io.InputStreamReader(InputStream) ["org.apache.hadoop.minikdc.MiniKdc"] At MiniKdc.java:[lines 112-557]
> [INFO] Found reliance on default encoding in org.apache.hadoop.minikdc.MiniKdc.main(String[]): new java.io.FileReader(File) ["org.apache.hadoop.minikdc.MiniKdc"] At MiniKdc.java:[lines 112-557]
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
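The usual fix for a "reliance on default encoding" warning is to pass an explicit charset to the reader. A minimal sketch of the pattern, not the actual patch (the helper name and the choice of UTF-8 are assumptions):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

public class EncodingFix {
    // Wrapping the stream with an explicit charset instead of
    // new InputStreamReader(InputStream), which uses the platform
    // default encoding and triggers the findbugs warning above.
    static String readAll(InputStream in) {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(in, StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                sb.append(line);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return sb.toString();
    }
}
```

The same idea applies to the `new java.io.FileReader(File)` warning: replace it with `new InputStreamReader(new FileInputStream(file), charset)`, since `FileReader` offers no charset parameter on these JDK versions.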
[jira] [Updated] (HADOOP-10481) Fix new findbugs warnings in hadoop-auth
[ https://issues.apache.org/jira/browse/HADOOP-10481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swarnim Kulkarni updated HADOOP-10481:
--------------------------------------
    Attachment: HADOOP-10481.2.patch.txt

Feedback addressed. New patch attached for review.

> Fix new findbugs warnings in hadoop-auth
> ----------------------------------------
>
> Key: HADOOP-10481
> URL: https://issues.apache.org/jira/browse/HADOOP-10481
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Haohui Mai
> Assignee: Swarnim Kulkarni
> Labels: newbie
> Attachments: HADOOP-10481.1.patch.txt, HADOOP-10481.2.patch.txt
>
> The following findbugs warnings need to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-auth ---
> [INFO] BugInstance size is 2
> [INFO] Error size is 0
> [INFO] Total bugs: 2
> [INFO] Found reliance on default encoding in org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(FilterConfig): String.getBytes() ["org.apache.hadoop.security.authentication.server.AuthenticationFilter"] At AuthenticationFilter.java:[lines 76-455]
> [INFO] Found reliance on default encoding in org.apache.hadoop.security.authentication.util.Signer.computeSignature(String): String.getBytes() ["org.apache.hadoop.security.authentication.util.Signer"] At Signer.java:[lines 34-96]
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
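For the `String.getBytes()` warnings, the fix is the same idea applied to strings: pass a charset so the bytes are identical on every JVM. A hypothetical sketch, not the actual Signer implementation (the SHA-256 digest and method shape here are assumptions for illustration only):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class SignerSketch {
    // String.getBytes() with no argument uses the platform default
    // encoding; passing StandardCharsets.UTF_8 makes the signature
    // bytes deterministic across JVMs and silences the warning.
    static byte[] computeSignature(String secret, String value) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(value.getBytes(StandardCharsets.UTF_8));
            return md.digest(secret.getBytes(StandardCharsets.UTF_8));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

For a component like Signer this matters beyond the warning: a signature computed with the platform default encoding on one host may fail to verify on a host with a different default.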
[jira] [Commented] (HADOOP-10479) Fix new findbugs warnings in hadoop-minikdc
[ https://issues.apache.org/jira/browse/HADOOP-10479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13997312#comment-13997312 ]

Swarnim Kulkarni commented on HADOOP-10479:
-------------------------------------------

This is now up for review.

> Fix new findbugs warnings in hadoop-minikdc
> -------------------------------------------
>
> Key: HADOOP-10479
> URL: https://issues.apache.org/jira/browse/HADOOP-10479
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Haohui Mai
> Assignee: Swarnim Kulkarni
> Labels: newbie
> Attachments: HADOOP-10479.1.patch.txt
>
> The following findbugs warnings need to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-minikdc ---
> [INFO] BugInstance size is 2
> [INFO] Error size is 0
> [INFO] Total bugs: 2
> [INFO] Found reliance on default encoding in org.apache.hadoop.minikdc.MiniKdc.initKDCServer(): new java.io.InputStreamReader(InputStream) ["org.apache.hadoop.minikdc.MiniKdc"] At MiniKdc.java:[lines 112-557]
> [INFO] Found reliance on default encoding in org.apache.hadoop.minikdc.MiniKdc.main(String[]): new java.io.FileReader(File) ["org.apache.hadoop.minikdc.MiniKdc"] At MiniKdc.java:[lines 112-557]
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Updated] (HADOOP-10479) Fix new findbugs warnings in hadoop-minikdc
[ https://issues.apache.org/jira/browse/HADOOP-10479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swarnim Kulkarni updated HADOOP-10479:
--------------------------------------
    Attachment: HADOOP-10479.1.patch.txt

Patch attached that fixes the findbugs warnings:

{noformat}
[INFO]
[INFO] --- findbugs-maven-plugin:2.5.3:findbugs (default-cli) @ hadoop-minikdc ---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO]
[INFO] BUILD SUCCESS
[INFO]
[INFO] Total time: 24.729s
[INFO] Finished at: Wed May 14 00:56:48 CDT 2014
[INFO] Final Memory: 25M/81M
[INFO]
{noformat}

> Fix new findbugs warnings in hadoop-minikdc
> -------------------------------------------
>
> Key: HADOOP-10479
> URL: https://issues.apache.org/jira/browse/HADOOP-10479
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Haohui Mai
> Assignee: Swarnim Kulkarni
> Labels: newbie
> Attachments: HADOOP-10479.1.patch.txt
>
> The following findbugs warnings need to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-minikdc ---
> [INFO] BugInstance size is 2
> [INFO] Error size is 0
> [INFO] Total bugs: 2
> [INFO] Found reliance on default encoding in org.apache.hadoop.minikdc.MiniKdc.initKDCServer(): new java.io.InputStreamReader(InputStream) ["org.apache.hadoop.minikdc.MiniKdc"] At MiniKdc.java:[lines 112-557]
> [INFO] Found reliance on default encoding in org.apache.hadoop.minikdc.MiniKdc.main(String[]): new java.io.FileReader(File) ["org.apache.hadoop.minikdc.MiniKdc"] At MiniKdc.java:[lines 112-557]
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Work started] (HADOOP-10479) Fix new findbugs warnings in hadoop-minikdc
[ https://issues.apache.org/jira/browse/HADOOP-10479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on HADOOP-10479 started by Swarnim Kulkarni.

> Fix new findbugs warnings in hadoop-minikdc
> -------------------------------------------
>
> Key: HADOOP-10479
> URL: https://issues.apache.org/jira/browse/HADOOP-10479
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Haohui Mai
> Assignee: Swarnim Kulkarni
> Labels: newbie
>
> The following findbugs warnings need to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-minikdc ---
> [INFO] BugInstance size is 2
> [INFO] Error size is 0
> [INFO] Total bugs: 2
> [INFO] Found reliance on default encoding in org.apache.hadoop.minikdc.MiniKdc.initKDCServer(): new java.io.InputStreamReader(InputStream) ["org.apache.hadoop.minikdc.MiniKdc"] At MiniKdc.java:[lines 112-557]
> [INFO] Found reliance on default encoding in org.apache.hadoop.minikdc.MiniKdc.main(String[]): new java.io.FileReader(File) ["org.apache.hadoop.minikdc.MiniKdc"] At MiniKdc.java:[lines 112-557]
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Updated] (HADOOP-10481) Fix new findbugs warnings in hadoop-auth
[ https://issues.apache.org/jira/browse/HADOOP-10481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swarnim Kulkarni updated HADOOP-10481:
--------------------------------------
    Status: Patch Available  (was: Open)

> Fix new findbugs warnings in hadoop-auth
> ----------------------------------------
>
> Key: HADOOP-10481
> URL: https://issues.apache.org/jira/browse/HADOOP-10481
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Haohui Mai
> Assignee: Swarnim Kulkarni
> Labels: newbie
> Attachments: HADOOP-10481.1.patch.txt, HADOOP-10481.2.patch.txt
>
> The following findbugs warnings need to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-auth ---
> [INFO] BugInstance size is 2
> [INFO] Error size is 0
> [INFO] Total bugs: 2
> [INFO] Found reliance on default encoding in org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(FilterConfig): String.getBytes() ["org.apache.hadoop.security.authentication.server.AuthenticationFilter"] At AuthenticationFilter.java:[lines 76-455]
> [INFO] Found reliance on default encoding in org.apache.hadoop.security.authentication.util.Signer.computeSignature(String): String.getBytes() ["org.apache.hadoop.security.authentication.util.Signer"] At Signer.java:[lines 34-96]
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (HADOOP-10576) Fail build on new findbugs warnings
[ https://issues.apache.org/jira/browse/HADOOP-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990039#comment-13990039 ]

Swarnim Kulkarni commented on HADOOP-10576:
-------------------------------------------

Yes it does. Thanks Chris.

> Fail build on new findbugs warnings
> -----------------------------------
>
> Key: HADOOP-10576
> URL: https://issues.apache.org/jira/browse/HADOOP-10576
> Project: Hadoop Common
> Issue Type: Improvement
> Components: build
> Reporter: Swarnim Kulkarni
> Assignee: Swarnim Kulkarni
>
> Issues like HADOOP-10477 seem like an unnecessary technical debt for developers I think which can be easily fixed by simply failing the build anytime a code change introduces a new findbugs warning. This might help keep the code base cleaner and avoid us going back to fix those warnings.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (HADOOP-10541) InputStream in MiniKdc#initKDCServer for minikdc.ldiff is not closed
[ https://issues.apache.org/jira/browse/HADOOP-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13989869#comment-13989869 ]

Swarnim Kulkarni commented on HADOOP-10541:
-------------------------------------------

Thanks Chris, and sorry for that. Is there an Eclipse formatter that I can use for my future contributions? I tried hunting for one here [1] but couldn't find one. The only one I found was here [2]. Is it ok to use that? Thanks again.

[1] http://wiki.apache.org/hadoop/EclipseEnvironment
[2] https://github.com/cloudera/blog-eclipse/blob/master/hadoop-format.xml

> InputStream in MiniKdc#initKDCServer for minikdc.ldiff is not closed
> --------------------------------------------------------------------
>
> Key: HADOOP-10541
> URL: https://issues.apache.org/jira/browse/HADOOP-10541
> Project: Hadoop Common
> Issue Type: Bug
> Components: test
> Affects Versions: 3.0.0, 2.4.0
> Reporter: Ted Yu
> Assignee: Swarnim Kulkarni
> Priority: Minor
> Attachments: HADOOP-10541.1.patch.txt, HADOOP-10541.2.patch.txt, HADOOP-10541.3.patch.txt, HADOOP-10541.4.patch
>
> The same InputStream variable is used for minikdc.ldiff and minikdc-krb5.conf:
> {code}
> InputStream is = cl.getResourceAsStream("minikdc.ldiff");
> ...
> is = cl.getResourceAsStream("minikdc-krb5.conf");
> {code}
> Before the second assignment, is should be closed.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
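The leak described in the quoted issue comes from reassigning one stream variable without closing the first stream. A minimal sketch of the pattern that avoids it (method and variable names are illustrative, not from the patch): give each resource its own try-with-resources block.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

public class StreamReuse {
    // Instead of reassigning a single InputStream variable, each
    // resource gets its own try-with-resources block, which guarantees
    // the first stream is closed before the second one is opened.
    static int countBytes(InputStream first, InputStream second) {
        int total = 0;
        try {
            try (InputStream is = first) {
                while (is.read() != -1) {
                    total++;
                }
            }
            try (InputStream is = second) {
                while (is.read() != -1) {
                    total++;
                }
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return total;
    }
}
```

On Java 6 targets, where try-with-resources is unavailable, the equivalent is a `finally` block that closes the stream before the variable is reassigned.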
[jira] [Created] (HADOOP-10576) Fail build on new findbugs warnings
Swarnim Kulkarni created HADOOP-10576:
-------------------------------------

Summary: Fail build on new findbugs warnings
Key: HADOOP-10576
URL: https://issues.apache.org/jira/browse/HADOOP-10576
Project: Hadoop Common
Issue Type: Improvement
Components: build
Reporter: Swarnim Kulkarni
Assignee: Swarnim Kulkarni

Issues like HADOOP-10477 seem like an unnecessary technical debt for developers I think we can be easily fixed by simply failing the build anytime a code change introduces a new findbugs warning. This might help keep the code base cleaner and avoid us going back to fix those warnings.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
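One possible way to implement the proposal above with the plugin version already used in these builds is to bind the findbugs `check` goal into the lifecycle, since `check` fails the build when bugs are found. This is an illustrative pom.xml fragment, not the wiring actually adopted for Hadoop:

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>findbugs-maven-plugin</artifactId>
  <version>2.5.3</version>
  <executions>
    <execution>
      <id>fail-on-findbugs</id>
      <phase>verify</phase>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <!-- check fails the build when bugs are found; failOnError
         defaults to true and is spelled out here for clarity -->
    <failOnError>true</failOnError>
  </configuration>
</plugin>
```

With this binding, `mvn verify` breaks as soon as a change introduces a new warning, instead of accumulating cleanup sub-tasks like the ones in this thread.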
[jira] [Updated] (HADOOP-10576) Fail build on new findbugs warnings
[ https://issues.apache.org/jira/browse/HADOOP-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swarnim Kulkarni updated HADOOP-10576:
--------------------------------------
    Description: Issues like HADOOP-10477 seem like an unnecessary technical debt for developers I think which can be easily fixed by simply failing the build anytime a code change introduces a new findbugs warning. This might help keep the code base cleaner and avoid us going back to fix those warnings.  (was: Issues like HADOOP-10477 seem like an unnecessary technical debt for developers I think we can be easily fixed by simply failing the build anytime a code change introduces a new findbugs warning. This might help keep the code base cleaner and avoid us going back to fix those warnings.)

> Fail build on new findbugs warnings
> -----------------------------------
>
> Key: HADOOP-10576
> URL: https://issues.apache.org/jira/browse/HADOOP-10576
> Project: Hadoop Common
> Issue Type: Improvement
> Components: build
> Reporter: Swarnim Kulkarni
> Assignee: Swarnim Kulkarni
>
> Issues like HADOOP-10477 seem like an unnecessary technical debt for developers I think which can be easily fixed by simply failing the build anytime a code change introduces a new findbugs warning. This might help keep the code base cleaner and avoid us going back to fix those warnings.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (HADOOP-10517) InputStream is not closed in two methods of JarFinder
[ https://issues.apache.org/jira/browse/HADOOP-10517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13989564#comment-13989564 ]

Swarnim Kulkarni commented on HADOOP-10517:
-------------------------------------------

I might be mistaken, but doesn't closeEntry close just the current entry while still leaving the stream open? I would have thought that we would still need to call close() on the output stream to release all the resources it is holding on to.

> InputStream is not closed in two methods of JarFinder
> -----------------------------------------------------
>
> Key: HADOOP-10517
> URL: https://issues.apache.org/jira/browse/HADOOP-10517
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Ted Yu
> Assignee: Ted Yu
> Priority: Minor
> Attachments: HADOOP-10517.1.patch.txt, hadoop-10517-v1.txt, hadoop-10517-v2.txt
>
> JarFinder#jarDir() and JarFinder#zipDir() have such code:
> {code}
> InputStream is = new FileInputStream(f);
> copyToZipStream(is, anEntry, zos);
> {code}
> The InputStream is closed in copyToZipStream() but should be enclosed in finally block.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (HADOOP-10517) InputStream is not closed in two methods of JarFinder
[ https://issues.apache.org/jira/browse/HADOOP-10517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13989281#comment-13989281 ]

Swarnim Kulkarni commented on HADOOP-10517:
-------------------------------------------

It looks like there is a bug here that we can fix with this JIRA. closeEntry is called twice: once here [1] and again here [2]. So if the "if" branch gets executed, we try to close the entry twice, resulting in the error in the JUnit test. I think we can remove the call inside the "if" to fix the issue.

[1] https://github.com/apache/hadoop-common/blob/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/JarFinder.java#L67
[2] https://github.com/apache/hadoop-common/blob/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/JarFinder.java#L72

> InputStream is not closed in two methods of JarFinder
> -----------------------------------------------------
>
> Key: HADOOP-10517
> URL: https://issues.apache.org/jira/browse/HADOOP-10517
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Ted Yu
> Assignee: Ted Yu
> Priority: Minor
> Attachments: HADOOP-10517.1.patch.txt, hadoop-10517-v1.txt, hadoop-10517-v2.txt
>
> JarFinder#jarDir() and JarFinder#zipDir() have such code:
> {code}
> InputStream is = new FileInputStream(f);
> copyToZipStream(is, anEntry, zos);
> {code}
> The InputStream is closed in copyToZipStream() but should be enclosed in finally block.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
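The division of labor between closeEntry and close that this thread debates can be shown in a small self-contained sketch (this is generic java.util.zip usage, not the JarFinder code itself):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipEntrySketch {
    // closeEntry() only finishes the current entry; the stream itself
    // stays open until close() is called. So closeEntry() belongs once
    // per entry, and close() once per stream.
    static byte[] zipOneEntry(String name, byte[] data) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(bos)) {
            zos.putNextEntry(new ZipEntry(name));
            zos.write(data);
            zos.closeEntry(); // once per entry, not twice
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } // try-with-resources calls close(), releasing the stream
        return bos.toByteArray();
    }
}
```

This matches both comments above: removing the duplicated closeEntry call fixes the double-close, while the stream's own close() is still required to release its resources.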
[jira] [Updated] (HADOOP-10481) Fix new findbugs warnings in hadoop-auth
[ https://issues.apache.org/jira/browse/HADOOP-10481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swarnim Kulkarni updated HADOOP-10481:
--------------------------------------
    Attachment: HADOOP-10481.1.patch.txt

Patch attached. Output with the patch:

{noformat}
[INFO] --- findbugs-maven-plugin:2.5.3:findbugs (default-cli) @ hadoop-auth ---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO]
[INFO] BUILD SUCCESS
[INFO]
[INFO] Total time: 25.571s
[INFO] Finished at: Sun May 04 23:38:14 CDT 2014
[INFO] Final Memory: 18M/81M
[INFO]
{noformat}

> Fix new findbugs warnings in hadoop-auth
> ----------------------------------------
>
> Key: HADOOP-10481
> URL: https://issues.apache.org/jira/browse/HADOOP-10481
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Haohui Mai
> Assignee: Swarnim Kulkarni
> Labels: newbie
> Attachments: HADOOP-10481.1.patch.txt
>
> The following findbugs warnings need to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-auth ---
> [INFO] BugInstance size is 2
> [INFO] Error size is 0
> [INFO] Total bugs: 2
> [INFO] Found reliance on default encoding in org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(FilterConfig): String.getBytes() ["org.apache.hadoop.security.authentication.server.AuthenticationFilter"] At AuthenticationFilter.java:[lines 76-455]
> [INFO] Found reliance on default encoding in org.apache.hadoop.security.authentication.util.Signer.computeSignature(String): String.getBytes() ["org.apache.hadoop.security.authentication.util.Signer"] At Signer.java:[lines 34-96]
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (HADOOP-10481) Fix new findbugs warnings in hadoop-auth
[ https://issues.apache.org/jira/browse/HADOOP-10481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13989278#comment-13989278 ]

Swarnim Kulkarni commented on HADOOP-10481:
-------------------------------------------

This is ready for review.

> Fix new findbugs warnings in hadoop-auth
> ----------------------------------------
>
> Key: HADOOP-10481
> URL: https://issues.apache.org/jira/browse/HADOOP-10481
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Haohui Mai
> Assignee: Swarnim Kulkarni
> Labels: newbie
> Attachments: HADOOP-10481.1.patch.txt
>
> The following findbugs warnings need to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-auth ---
> [INFO] BugInstance size is 2
> [INFO] Error size is 0
> [INFO] Total bugs: 2
> [INFO] Found reliance on default encoding in org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(FilterConfig): String.getBytes() ["org.apache.hadoop.security.authentication.server.AuthenticationFilter"] At AuthenticationFilter.java:[lines 76-455]
> [INFO] Found reliance on default encoding in org.apache.hadoop.security.authentication.util.Signer.computeSignature(String): String.getBytes() ["org.apache.hadoop.security.authentication.util.Signer"] At Signer.java:[lines 34-96]
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (HADOOP-10517) InputStream is not closed in two methods of JarFinder
[ https://issues.apache.org/jira/browse/HADOOP-10517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13989274#comment-13989274 ]

Swarnim Kulkarni commented on HADOOP-10517:
-------------------------------------------

That's very interesting, because the only extra thing that closeQuietly does is check for null, which anyway would just throw an NPE for a null value.

> InputStream is not closed in two methods of JarFinder
> -----------------------------------------------------
>
> Key: HADOOP-10517
> URL: https://issues.apache.org/jira/browse/HADOOP-10517
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Ted Yu
> Assignee: Ted Yu
> Priority: Minor
> Attachments: HADOOP-10517.1.patch.txt, hadoop-10517-v1.txt, hadoop-10517-v2.txt
>
> JarFinder#jarDir() and JarFinder#zipDir() have such code:
> {code}
> InputStream is = new FileInputStream(f);
> copyToZipStream(is, anEntry, zos);
> {code}
> The InputStream is closed in copyToZipStream() but should be enclosed in finally block.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Updated] (HADOOP-10541) InputStream in MiniKdc#initKDCServer for minikdc.ldiff is not closed
[ https://issues.apache.org/jira/browse/HADOOP-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swarnim Kulkarni updated HADOOP-10541:
--------------------------------------
    Attachment: HADOOP-10541.3.patch.txt

New patch attached with minor cleanup.

> InputStream in MiniKdc#initKDCServer for minikdc.ldiff is not closed
> --------------------------------------------------------------------
>
> Key: HADOOP-10541
> URL: https://issues.apache.org/jira/browse/HADOOP-10541
> Project: Hadoop Common
> Issue Type: Bug
> Components: test
> Affects Versions: 3.0.0, 2.4.0
> Reporter: Ted Yu
> Assignee: Swarnim Kulkarni
> Priority: Minor
> Attachments: HADOOP-10541.1.patch.txt, HADOOP-10541.2.patch.txt, HADOOP-10541.3.patch.txt
>
> The same InputStream variable is used for minikdc.ldiff and minikdc-krb5.conf:
> {code}
> InputStream is = cl.getResourceAsStream("minikdc.ldiff");
> ...
> is = cl.getResourceAsStream("minikdc-krb5.conf");
> {code}
> Before the second assignment, is should be closed.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Assigned] (HADOOP-10481) Fix new findbugs warnings in hadoop-auth
[ https://issues.apache.org/jira/browse/HADOOP-10481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swarnim Kulkarni reassigned HADOOP-10481:
-----------------------------------------
    Assignee: Swarnim Kulkarni

> Fix new findbugs warnings in hadoop-auth
> ----------------------------------------
>
> Key: HADOOP-10481
> URL: https://issues.apache.org/jira/browse/HADOOP-10481
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Haohui Mai
> Assignee: Swarnim Kulkarni
> Labels: newbie
>
> The following findbugs warnings need to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-auth ---
> [INFO] BugInstance size is 2
> [INFO] Error size is 0
> [INFO] Total bugs: 2
> [INFO] Found reliance on default encoding in org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(FilterConfig): String.getBytes() ["org.apache.hadoop.security.authentication.server.AuthenticationFilter"] At AuthenticationFilter.java:[lines 76-455]
> [INFO] Found reliance on default encoding in org.apache.hadoop.security.authentication.util.Signer.computeSignature(String): String.getBytes() ["org.apache.hadoop.security.authentication.util.Signer"] At Signer.java:[lines 34-96]
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Assigned] (HADOOP-10479) Fix new findbugs warnings in hadoop-minikdc
[ https://issues.apache.org/jira/browse/HADOOP-10479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swarnim Kulkarni reassigned HADOOP-10479:
-----------------------------------------
    Assignee: Swarnim Kulkarni

> Fix new findbugs warnings in hadoop-minikdc
> -------------------------------------------
>
> Key: HADOOP-10479
> URL: https://issues.apache.org/jira/browse/HADOOP-10479
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Haohui Mai
> Assignee: Swarnim Kulkarni
> Labels: newbie
>
> The following findbugs warnings need to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-minikdc ---
> [INFO] BugInstance size is 2
> [INFO] Error size is 0
> [INFO] Total bugs: 2
> [INFO] Found reliance on default encoding in org.apache.hadoop.minikdc.MiniKdc.initKDCServer(): new java.io.InputStreamReader(InputStream) ["org.apache.hadoop.minikdc.MiniKdc"] At MiniKdc.java:[lines 112-557]
> [INFO] Found reliance on default encoding in org.apache.hadoop.minikdc.MiniKdc.main(String[]): new java.io.FileReader(File) ["org.apache.hadoop.minikdc.MiniKdc"] At MiniKdc.java:[lines 112-557]
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Assigned] (HADOOP-10482) Fix new findbugs warnings in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-10482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swarnim Kulkarni reassigned HADOOP-10482:
-----------------------------------------
    Assignee: Swarnim Kulkarni

> Fix new findbugs warnings in hadoop-common
> ------------------------------------------
>
> Key: HADOOP-10482
> URL: https://issues.apache.org/jira/browse/HADOOP-10482
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Haohui Mai
> Assignee: Swarnim Kulkarni
>
> The following findbugs warnings need to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-common ---
> [INFO] BugInstance size is 97
> [INFO] Error size is 0
> [INFO] Total bugs: 97
> [INFO] Found reliance on default encoding in org.apache.hadoop.conf.Configuration.getConfResourceAsReader(String): new java.io.InputStreamReader(InputStream) ["org.apache.hadoop.conf.Configuration"] At Configuration.java:[lines 169-2642]
> [INFO] Null passed for nonnull parameter of set(String, String) in org.apache.hadoop.conf.Configuration.setPattern(String, Pattern) ["org.apache.hadoop.conf.Configuration"] At Configuration.java:[lines 169-2642]
> [INFO] Format string should use %n rather than \n in org.apache.hadoop.conf.ReconfigurationServlet.printHeader(PrintWriter, String) ["org.apache.hadoop.conf.ReconfigurationServlet"] At ReconfigurationServlet.java:[lines 44-234]
> [INFO] Format string should use %n rather than \n in org.apache.hadoop.conf.ReconfigurationServlet.printHeader(PrintWriter, String) ["org.apache.hadoop.conf.ReconfigurationServlet"] At ReconfigurationServlet.java:[lines 44-234]
> [INFO] Found reliance on default encoding in new org.apache.hadoop.crypto.key.KeyProvider$Metadata(byte[]): new java.io.InputStreamReader(InputStream) ["org.apache.hadoop.crypto.key.KeyProvider$Metadata"] At KeyProvider.java:[lines 110-204]
> [INFO] Found reliance on default encoding in org.apache.hadoop.crypto.key.KeyProvider$Metadata.serialize(): new java.io.OutputStreamWriter(OutputStream) ["org.apache.hadoop.crypto.key.KeyProvider$Metadata"] At KeyProvider.java:[lines 110-204]
> [INFO] Redundant nullcheck of clazz, which is known to be non-null in org.apache.hadoop.fs.FileSystem.createFileSystem(URI, Configuration) ["org.apache.hadoop.fs.FileSystem"] At FileSystem.java:[lines 89-3017]
> [INFO] Unread public/protected field: org.apache.hadoop.fs.HarFileSystem$Store.endHash ["org.apache.hadoop.fs.HarFileSystem$Store"] At HarFileSystem.java:[lines 492-500]
> [INFO] Unread public/protected field: org.apache.hadoop.fs.HarFileSystem$Store.startHash ["org.apache.hadoop.fs.HarFileSystem$Store"] At HarFileSystem.java:[lines 492-500]
> [INFO] Found reliance on default encoding in org.apache.hadoop.fs.HardLink.createHardLink(File, File): new java.io.InputStreamReader(InputStream) ["org.apache.hadoop.fs.HardLink"] At HardLink.java:[lines 51-546]
> [INFO] Found reliance on default encoding in org.apache.hadoop.fs.HardLink.createHardLinkMult(File, String[], File, int): new java.io.InputStreamReader(InputStream) ["org.apache.hadoop.fs.HardLink"] At HardLink.java:[lines 51-546]
> [INFO] Found reliance on default encoding in org.apache.hadoop.fs.HardLink.getLinkCount(File): new java.io.InputStreamReader(InputStream) ["org.apache.hadoop.fs.HardLink"] At HardLink.java:[lines 51-546]
> [INFO] Bad attempt to compute absolute value of signed random integer in org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(String, long, Configuration, boolean) ["org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext"] At LocalDirAllocator.java:[lines 247-549]
> [INFO] Null passed for nonnull parameter of org.apache.hadoop.conf.Configuration.set(String, String) in org.apache.hadoop.fs.ftp.FTPFileSystem.initialize(URI, Configuration) ["org.apache.hadoop.fs.ftp.FTPFileSystem"] At FTPFileSystem.java:[lines 51-593]
> [INFO] Redundant nullcheck of dirEntries, which is known to be non-null in org.apache.hadoop.fs.ftp.FTPFileSystem.delete(FTPClient, Path, boolean) ["org.apache.hadoop.fs.ftp.FTPFileSystem"] At FTPFileSystem.java:[lines 51-593]
> [INFO] Redundant nullcheck of org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPClient, Path), which is known to be non-null in org.apache.hadoop.fs.ftp.FTPFileSystem.exists(FTPClient, Path) ["org.apache.hadoop.fs.ftp.FTPFileSystem"] At FTPFileSystem.java:[lines 51-593]
> [INFO] Found reliance on default encoding in org.apache.hadoop.fs.shell.Display$AvroFileInputStream.read(): String.getBytes() ["org.apache.hadoop.fs.shell.Display$AvroFileInputStream"] At Display.java:[lines 259-309]
> [INFO] Format string should use %n rather than \n in org.apache.hadoop.fs.shell.Display$Checksum.processPath
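One warning in the list above deserves a note because it is a genuine correctness bug, not a style issue: "Bad attempt to compute absolute value of signed random integer". A small sketch of the problem and the standard fix (generic illustration, not the LocalDirAllocator code):

```java
import java.util.Random;

public class RandomIndex {
    // Math.abs(random.nextInt()) % n is subtly wrong:
    // Math.abs(Integer.MIN_VALUE) is still Integer.MIN_VALUE (there is
    // no positive counterpart in two's complement), so the result can
    // be negative. Bounding the draw directly avoids the problem.
    static int pick(Random random, int n) {
        return random.nextInt(n); // uniform in [0, n)
    }
}
```

`nextInt(bound)` also avoids the slight modulo bias that `% n` introduces when the bound does not divide 2^32.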
[jira] [Assigned] (HADOOP-10478) Fix new findbugs warnings in hadoop-maven-plugins
[ https://issues.apache.org/jira/browse/HADOOP-10478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swarnim Kulkarni reassigned HADOOP-10478:
-----------------------------------------
    Assignee: Swarnim Kulkarni

> Fix new findbugs warnings in hadoop-maven-plugins
> -------------------------------------------------
>
> Key: HADOOP-10478
> URL: https://issues.apache.org/jira/browse/HADOOP-10478
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Haohui Mai
> Assignee: Swarnim Kulkarni
> Labels: newbie
>
> The following findbug warning needs to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-maven-plugins ---
> [INFO] BugInstance size is 1
> [INFO] Error size is 0
> [INFO] Total bugs: 1
> [INFO] Found reliance on default encoding in new org.apache.hadoop.maven.plugin.util.Exec$OutputBufferThread(InputStream): new java.io.InputStreamReader(InputStream) ["org.apache.hadoop.maven.plugin.util.Exec$OutputBufferThread"] At Exec.java:[lines 89-114]
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Assigned] (HADOOP-10480) Fix new findbugs warnings in hadoop-hdfs
[ https://issues.apache.org/jira/browse/HADOOP-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swarnim Kulkarni reassigned HADOOP-10480: - Assignee: Swarnim Kulkarni > Fix new findbugs warnings in hadoop-hdfs > > > Key: HADOOP-10480 > URL: https://issues.apache.org/jira/browse/HADOOP-10480 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Haohui Mai >Assignee: Swarnim Kulkarni > Labels: newbie > > The following findbugs warnings need to be fixed: > {noformat} > [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-hdfs --- > [INFO] BugInstance size is 14 > [INFO] Error size is 0 > [INFO] Total bugs: 14 > [INFO] Redundant nullcheck of curPeer, which is known to be non-null in > org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp() > ["org.apache.hadoop.hdfs.BlockReaderFactory"] At > BlockReaderFactory.java:[lines 68-808] > [INFO] Increment of volatile field > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.restartingNodeIndex in > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery() > ["org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer"] At > DFSOutputStream.java:[lines 308-1492] > [INFO] Found reliance on default encoding in > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(DataOutputStream, > DataInputStream, DataOutputStream, String, DataTransferThrottler, > DatanodeInfo[]): new java.io.FileWriter(File) > ["org.apache.hadoop.hdfs.server.datanode.BlockReceiver"] At > BlockReceiver.java:[lines 66-905] > [INFO] b must be nonnull but is marked as nullable > ["org.apache.hadoop.hdfs.server.datanode.DatanodeJspHelper$2"] At > DatanodeJspHelper.java:[lines 546-549] > [INFO] Found reliance on default encoding in > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(ReplicaMap, > File, boolean): new java.util.Scanner(File) > ["org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice"] At > 
BlockPoolSlice.java:[lines 58-427] > [INFO] Found reliance on default encoding in > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.loadDfsUsed(): > new java.util.Scanner(File) > ["org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice"] At > BlockPoolSlice.java:[lines 58-427] > [INFO] Found reliance on default encoding in > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.saveDfsUsed(): > new java.io.FileWriter(File) > ["org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice"] At > BlockPoolSlice.java:[lines 58-427] > [INFO] Redundant nullcheck of f, which is known to be non-null in > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(String, > Block[]) > ["org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl"] At > FsDatasetImpl.java:[lines 60-1910] > [INFO] Found reliance on default encoding in > org.apache.hadoop.hdfs.server.namenode.FSImageUtil. FSImageUtil>(): String.getBytes() > ["org.apache.hadoop.hdfs.server.namenode.FSImageUtil"] At > FSImageUtil.java:[lines 34-89] > [INFO] Found reliance on default encoding in > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(String, > byte[], boolean): new String(byte[]) > ["org.apache.hadoop.hdfs.server.namenode.FSNamesystem"] At > FSNamesystem.java:[lines 301-7701] > [INFO] Found reliance on default encoding in > org.apache.hadoop.hdfs.server.namenode.INode.dumpTreeRecursively(PrintStream): > new java.io.PrintWriter(OutputStream, boolean) > ["org.apache.hadoop.hdfs.server.namenode.INode"] At INode.java:[lines 51-744] > [INFO] Redundant nullcheck of fos, which is known to be non-null in > org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlocksToLostFound(String, > HdfsFileStatus, LocatedBlocks) > ["org.apache.hadoop.hdfs.server.namenode.NamenodeFsck"] At > NamenodeFsck.java:[lines 94-710] > [INFO] Found reliance on default encoding in > 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]): > new java.io.PrintWriter(File) > ["org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB"] At > OfflineImageViewerPB.java:[lines 45-181] > [INFO] Found reliance on default encoding in > org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]): > new java.io.PrintWriter(OutputStream) > ["org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB"] At > OfflineImageViewerPB.java:[lines 45-181] > {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
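The "reliance on default encoding" warnings above all have the same shape of fix: pass an explicit charset instead of letting `FileWriter`, `String.getBytes()`, or `new String(byte[])` pick up the platform default. The sketch below is illustrative only (the class and method names are hypothetical, not from the HADOOP-10480 patch), but it shows the charset-explicit replacements findbugs accepts:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class DefaultEncodingFix {
    // new FileWriter(file) uses the platform default charset; wrapping a
    // FileOutputStream in an OutputStreamWriter with an explicit charset
    // removes the ambiguity and silences the findbugs warning.
    static Writer openUtf8Writer(File file) throws IOException {
        return new OutputStreamWriter(new FileOutputStream(file), StandardCharsets.UTF_8);
    }

    // Likewise, String.getBytes() and new String(byte[]) have
    // charset-taking overloads.
    static byte[] utf8Bytes(String s) {
        return s.getBytes(StandardCharsets.UTF_8);
    }

    static String utf8String(byte[] b) {
        return new String(b, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("dfsUsed", ".tmp");
        tmp.deleteOnExit();
        try (Writer w = openUtf8Writer(tmp)) {
            w.write("12345");
        }
        System.out.println(utf8String(utf8Bytes("ok"))); // prints "ok"
    }
}
```

The same substitution covers the `Scanner(File)` cases: `new java.util.Scanner(file, "UTF-8")` takes the charset name explicitly.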
[jira] [Commented] (HADOOP-10517) InputStream is not closed in two methods of JarFinder
[ https://issues.apache.org/jira/browse/HADOOP-10517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13988921#comment-13988921 ] Swarnim Kulkarni commented on HADOOP-10517: --- [~tedyu] Thanks for the new patch. Out of curiosity, should we use IOUtils#closeQuietly[1] instead of nested try/catch blocks, since we are pulling in the commons-io dependency anyway? [1] http://commons.apache.org/proper/commons-io/apidocs/org/apache/commons/io/IOUtils.html#closeQuietly(java.io.Closeable...) > InputStream is not closed in two methods of JarFinder > - > > Key: HADOOP-10517 > URL: https://issues.apache.org/jira/browse/HADOOP-10517 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ted Yu >Priority: Minor > Attachments: HADOOP-10517.1.patch.txt, hadoop-10517-v1.txt > > > JarFinder#jarDir() and JarFinder#zipDir() have such code: > {code} > InputStream is = new FileInputStream(f); > copyToZipStream(is, anEntry, zos); > {code} > The InputStream is closed in copyToZipStream() but should be enclosed in a > finally block. -- This message was sent by Atlassian JIRA (v6.2#6252)
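The closeQuietly suggestion above can be sketched as follows. Note this is not the JarFinder patch itself: `closeQuietly` here is a local stand-in for commons-io's `IOUtils.closeQuietly(Closeable...)` so the example stays self-contained, and `copy` is a hypothetical stand-in for `copyToZipStream`:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CloseQuietlySketch {
    // Local stand-in for org.apache.commons.io.IOUtils.closeQuietly(Closeable...):
    // closes each argument, tolerating nulls and swallowing IOExceptions.
    static void closeQuietly(Closeable... closeables) {
        for (Closeable c : closeables) {
            try {
                if (c != null) {
                    c.close();
                }
            } catch (IOException ignored) {
                // quiet by design
            }
        }
    }

    // Hypothetical copy step standing in for JarFinder#copyToZipStream.
    static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // The JarFinder pattern, with the close guaranteed by finally and
        // no nested try/catch needed around close():
        InputStream is = new ByteArrayInputStream(new byte[] {1, 2, 3});
        try {
            copy(is, out);
        } finally {
            closeQuietly(is);
        }
        System.out.println(out.size()); // prints 3
    }
}
```

The appeal of closeQuietly in cleanup paths is that a failure while closing doesn't mask an exception already propagating out of the try block.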
[jira] [Updated] (HADOOP-10541) InputStream in MiniKdc#initKDCServer for minikdc.ldiff is not closed
[ https://issues.apache.org/jira/browse/HADOOP-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swarnim Kulkarni updated HADOOP-10541: -- Attachment: HADOOP-10541.2.patch.txt Thanks for the feedback Chris. I addressed your concerns and attached a new patch. Let me know if this looks any better. Thanks. > InputStream in MiniKdc#initKDCServer for minikdc.ldiff is not closed > > > Key: HADOOP-10541 > URL: https://issues.apache.org/jira/browse/HADOOP-10541 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 3.0.0, 2.4.0 >Reporter: Ted Yu >Assignee: Swarnim Kulkarni >Priority: Minor > Attachments: HADOOP-10541.1.patch.txt, HADOOP-10541.2.patch.txt > > > The same InputStream variable is used for minikdc.ldiff and minikdc-krb5.conf > : > {code} > InputStream is = cl.getResourceAsStream("minikdc.ldiff"); > ... > is = cl.getResourceAsStream("minikdc-krb5.conf"); > {code} > Before the second assignment, is should be closed. -- This message was sent by Atlassian JIRA (v6.2#6252)
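One way the double-use of `is` described above can be avoided entirely is a separate try-with-resources block (Java 7+) per resource, so each stream is closed before the next is opened. This is a sketch, not the attached patch (which may instead close explicitly); `openResource` is a hypothetical stand-in for `cl.getResourceAsStream(name)` backed by in-memory data so the example runs on its own:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class MiniKdcStreamSketch {
    // Hypothetical stand-in for cl.getResourceAsStream(name).
    static InputStream openResource(String name) {
        return new ByteArrayInputStream(name.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws IOException {
        // One try-with-resources block per resource: the first stream is
        // closed before the second is opened, instead of reusing a single
        // variable and leaking the first stream.
        try (InputStream is = openResource("minikdc.ldiff")) {
            System.out.println("ldiff bytes: " + is.available());
        }
        try (InputStream is = openResource("minikdc-krb5.conf")) {
            System.out.println("krb5 conf bytes: " + is.available());
        }
    }
}
```

Scoping each stream to its own block also lets the compiler reject any accidental later use of a closed stream, which a reassigned shared variable cannot.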
[jira] [Commented] (HADOOP-10517) InputStream is not closed in two methods of JarFinder
[ https://issues.apache.org/jira/browse/HADOOP-10517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987743#comment-13987743 ] Swarnim Kulkarni commented on HADOOP-10517: --- This should be ready for review. > InputStream is not closed in two methods of JarFinder > - > > Key: HADOOP-10517 > URL: https://issues.apache.org/jira/browse/HADOOP-10517 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ted Yu >Priority: Minor > Attachments: HADOOP-10517.1.patch.txt > > > JarFinder#jarDir() and JarFinder#zipDir() have such code: > {code} > InputStream is = new FileInputStream(f); > copyToZipStream(is, anEntry, zos); > {code} > The InputStream is not closed after copy operation. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10541) InputStream in MiniKdc#initKDCServer for minikdc.ldiff is not closed
[ https://issues.apache.org/jira/browse/HADOOP-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987742#comment-13987742 ] Swarnim Kulkarni commented on HADOOP-10541: --- This should be ready for review. > InputStream in MiniKdc#initKDCServer for minikdc.ldiff is not closed > > > Key: HADOOP-10541 > URL: https://issues.apache.org/jira/browse/HADOOP-10541 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ted Yu >Priority: Minor > Attachments: HADOOP-10541.1.patch.txt > > > The same InputStream variable is used for minikdc.ldiff and minikdc-krb5.conf > : > {code} > InputStream is = cl.getResourceAsStream("minikdc.ldiff"); > ... > is = cl.getResourceAsStream("minikdc-krb5.conf"); > {code} > Before the second assignment, is should be closed. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10517) InputStream is not closed in two methods of JarFinder
[ https://issues.apache.org/jira/browse/HADOOP-10517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swarnim Kulkarni updated HADOOP-10517: -- Attachment: HADOOP-10517.1.patch.txt Patch attached. > InputStream is not closed in two methods of JarFinder > - > > Key: HADOOP-10517 > URL: https://issues.apache.org/jira/browse/HADOOP-10517 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ted Yu >Priority: Minor > Attachments: HADOOP-10517.1.patch.txt > > > JarFinder#jarDir() and JarFinder#zipDir() have such code: > {code} > InputStream is = new FileInputStream(f); > copyToZipStream(is, anEntry, zos); > {code} > The InputStream is not closed after copy operation. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10517) InputStream is not closed in two methods of JarFinder
[ https://issues.apache.org/jira/browse/HADOOP-10517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swarnim Kulkarni updated HADOOP-10517: -- Status: Patch Available (was: Open) > InputStream is not closed in two methods of JarFinder > - > > Key: HADOOP-10517 > URL: https://issues.apache.org/jira/browse/HADOOP-10517 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ted Yu >Priority: Minor > Attachments: HADOOP-10517.1.patch.txt > > > JarFinder#jarDir() and JarFinder#zipDir() have such code: > {code} > InputStream is = new FileInputStream(f); > copyToZipStream(is, anEntry, zos); > {code} > The InputStream is not closed after copy operation. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10541) InputStream in MiniKdc#initKDCServer for minikdc.ldiff is not closed
[ https://issues.apache.org/jira/browse/HADOOP-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swarnim Kulkarni updated HADOOP-10541: -- Attachment: HADOOP-10541.1.patch.txt Patch attached. > InputStream in MiniKdc#initKDCServer for minikdc.ldiff is not closed > > > Key: HADOOP-10541 > URL: https://issues.apache.org/jira/browse/HADOOP-10541 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ted Yu >Priority: Minor > Attachments: HADOOP-10541.1.patch.txt > > > The same InputStream variable is used for minikdc.ldiff and minikdc-krb5.conf > : > {code} > InputStream is = cl.getResourceAsStream("minikdc.ldiff"); > ... > is = cl.getResourceAsStream("minikdc-krb5.conf"); > {code} > Before the second assignment, is should be closed. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10541) InputStream in MiniKdc#initKDCServer for minikdc.ldiff is not closed
[ https://issues.apache.org/jira/browse/HADOOP-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swarnim Kulkarni updated HADOOP-10541: -- Status: Patch Available (was: Open) > InputStream in MiniKdc#initKDCServer for minikdc.ldiff is not closed > > > Key: HADOOP-10541 > URL: https://issues.apache.org/jira/browse/HADOOP-10541 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ted Yu >Priority: Minor > Attachments: HADOOP-10541.1.patch.txt > > > The same InputStream variable is used for minikdc.ldiff and minikdc-krb5.conf > : > {code} > InputStream is = cl.getResourceAsStream("minikdc.ldiff"); > ... > is = cl.getResourceAsStream("minikdc-krb5.conf"); > {code} > Before the second assignment, is should be closed. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10541) InputStream in MiniKdc#initKDCServer for minikdc.ldiff is not closed
[ https://issues.apache.org/jira/browse/HADOOP-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987356#comment-13987356 ] Swarnim Kulkarni commented on HADOOP-10541: --- [~tedyu] If you are ok with it, I'd like to take a stab at this. > InputStream in MiniKdc#initKDCServer for minikdc.ldiff is not closed > > > Key: HADOOP-10541 > URL: https://issues.apache.org/jira/browse/HADOOP-10541 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ted Yu >Priority: Minor > > The same InputStream variable is used for minikdc.ldiff and minikdc-krb5.conf > : > {code} > InputStream is = cl.getResourceAsStream("minikdc.ldiff"); > ... > is = cl.getResourceAsStream("minikdc-krb5.conf"); > {code} > Before the second assignment, is should be closed. -- This message was sent by Atlassian JIRA (v6.2#6252)