[jira] [Commented] (HADOOP-14383) Implement FileSystem that reads from HTTP / HTTPS endpoints

2017-05-05 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15999317#comment-15999317
 ] 

Li Lu commented on HADOOP-14383:


Thanks for the work, [~wheat9]! This is useful. The previous Jenkins run revealed 
several findbugs warnings, but they appear to be existing ones in our codebase. 
+1 pending Jenkins. 

> Implement FileSystem that reads from HTTP / HTTPS endpoints
> ---
>
> Key: HADOOP-14383
> URL: https://issues.apache.org/jira/browse/HADOOP-14383
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HADOOP-14383.000.patch, HADOOP-14383.001.patch
>
>
> We have a use case where YARN applications would like to localize resources 
> from Artifactory. Putting the resources on HDFS itself might not be ideal as 
> we would like to leverage Artifactory to manage different versions of the 
> resources.
> It would be nice to have something like {{HttpFileSystem}} that implements 
> the Hadoop filesystem API and reads from an HTTP endpoint.
> Note that Samza has implemented the proposal by themselves:
> https://github.com/apache/samza/blob/master/samza-yarn/src/main/scala/org/apache/samza/util/hadoop/HttpFileSystem.scala
> The downside of this approach is that it requires the YARN cluster to put the 
> Samza jar on the classpath of each NM.
> It would be much nicer for Hadoop to have this feature built-in.
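As a rough illustration (not the patch's actual code), the read path of such a filesystem boils down to opening a stream from a URL. The sketch below uses only java.net; the class and method names are hypothetical, not Hadoop's actual API:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.net.URL;

// Hypothetical sketch of the read-only core of an HttpFileSystem.
// Seek and write are unsupported, which is acceptable for YARN resource
// localization, where each resource is read sequentially once.
public class HttpReadSketch {
    private final URI base;

    public HttpReadSketch(URI base) {
        this.base = base;
    }

    // Analogous in spirit to FileSystem.open(Path): resolve the path
    // against the base endpoint and return the connection's input stream.
    public InputStream open(String path) throws IOException {
        URL url = base.resolve(path).toURL();
        return url.openConnection().getInputStream();
    }
}
```

Localization would then just stream the resource from the Artifactory URL instead of HDFS.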



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11461) Namenode stdout log contains IllegalAccessException

2016-07-22 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390201#comment-15390201
 ] 

Li Lu commented on HADOOP-11461:


We ran into this issue with YARN timeline server v1.5 again. It seems that in 
Jersey 1.9 the logging is not performed through log4j, so changing the log4j 
settings was not helpful. Jersey uses java.util.logging instead, so editing 
Java's logging properties file and adding 
com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.level = 
OFF appears to work for us. The trunk code does not have this problem because 
it uses a later version of Jersey. 
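The same effect can be achieved programmatically; a minimal sketch against java.util.logging, using the logger name that appears in the stack trace (whether to silence it globally rather than via the properties file is a judgment call):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class SilenceWadlLogger {
    public static void main(String[] args) {
        // Jersey 1.9 logs through java.util.logging, not log4j, so the
        // noisy WADL grammar generator must be silenced via its JUL logger.
        Logger wadl = Logger.getLogger(
            "com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator");
        wadl.setLevel(Level.OFF);
        // With Level.OFF, nothing (not even SEVERE) is loggable any more.
        System.out.println("loggable: " + wadl.isLoggable(Level.SEVERE));
    }
}
```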

> Namenode stdout log contains IllegalAccessException
> ---
>
> Key: HADOOP-11461
> URL: https://issues.apache.org/jira/browse/HADOOP-11461
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.0
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
>
> We frequently see the following exception in namenode out log file.
> {noformat}
> Nov 19, 2014 8:11:19 PM 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator 
> attachTypes
> INFO: Couldn't find JAX-B element for class javax.ws.rs.core.Response
> Nov 19, 2014 8:11:19 PM 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8 
> resolve
> SEVERE: null
> java.lang.IllegalAccessException: Class 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8 can 
> not access a member of class javax.ws.rs.co
> re.Response with modifiers "protected"
> at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:109)
> at java.lang.Class.newInstance(Class.java:368)
> at 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8.resolve(WadlGeneratorJAXBGrammarGenerator.java:467)
> at 
> com.sun.jersey.server.wadl.WadlGenerator$ExternalGrammarDefinition.resolve(WadlGenerator.java:181)
> at 
> com.sun.jersey.server.wadl.ApplicationDescription.resolve(ApplicationDescription.java:81)
> at 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.attachTypes(WadlGeneratorJAXBGrammarGenerator.java:518)
> at com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:124)
> at 
> com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:104)
> at 
> com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:120)
> at 
> com.sun.jersey.server.impl.wadl.WadlMethodFactory$WadlOptionsMethodDispatcher.dispatch(WadlMethodFactory.java:98)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
> at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:384)
> at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:85)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1183)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at org.mortbay.jetty.servl

[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-05-18 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15289833#comment-15289833
 ] 

Li Lu commented on HADOOP-9613:
---

BTW, has anyone taken a look at the -1s sent by Jenkins? Any quick notes 
w.r.t. the checkstyle warnings? 

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0-alpha1
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary, maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.014.incompatible.patch, HADOOP-9613.014.incompatible.patch, 
> HADOOP-9613.015.incompatible.patch, HADOOP-9613.016.incompatible.patch, 
> HADOOP-9613.017.incompatible.patch, HADOOP-9613.1.patch, HADOOP-9613.2.patch, 
> HADOOP-9613.3.patch, HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8, which is quite old. 






[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-05-18 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15289825#comment-15289825
 ] 

Li Lu commented on HADOOP-9613:
---

Tried the latest incompatible patch with the latest YARN-2928 branch. There are 
a few trivial patch apply failures, but they are easily fixable (since the 
YARN-2928 branch is not up to date with the latest trunk). Tried all UTs 
introduced in the branch and verified with my local deployment. Everything 
seems to be fine. cc/[~sjlee0]

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0-alpha1
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary, maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.014.incompatible.patch, HADOOP-9613.014.incompatible.patch, 
> HADOOP-9613.015.incompatible.patch, HADOOP-9613.016.incompatible.patch, 
> HADOOP-9613.017.incompatible.patch, HADOOP-9613.1.patch, HADOOP-9613.2.patch, 
> HADOOP-9613.3.patch, HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8, which is quite old. 






[jira] [Updated] (HADOOP-13083) The number of javadocs warnings is limited to 100

2016-05-13 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-13083:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha1
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2, and branch-2.8. Thanks [~GergelyNovak] for the 
patch and [~andrew.wang] for the review! 

> The number of javadocs warnings is limited to 100 
> --
>
> Key: HADOOP-13083
> URL: https://issues.apache.org/jira/browse/HADOOP-13083
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Li Lu
>Assignee: Gergely Novák
>Priority: Critical
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-4978.001.patch
>
>
> We are generating a lot of javadoc warnings with JDK 1.8. Right now the 
> number is limited to 100. Enlarging this limit can probably reveal more 
> problems in one batch during our javadoc generation process. 
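For reference, the 100-warning cap comes from javadoc's own -Xmaxwarns default; one hedged way to raise it (the committed patch may configure this differently) is via the maven-javadoc-plugin:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <!-- assumption: raise javadoc's default cap of 100 reported warnings -->
    <additionalparam>-Xmaxwarns 10000</additionalparam>
  </configuration>
</plugin>
```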






[jira] [Commented] (HADOOP-13083) The number of javadocs warnings is limited to 100

2016-05-13 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15283105#comment-15283105
 ] 

Li Lu commented on HADOOP-13083:


+1. Will commit shortly. 

> The number of javadocs warnings is limited to 100 
> --
>
> Key: HADOOP-13083
> URL: https://issues.apache.org/jira/browse/HADOOP-13083
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Li Lu
>Assignee: Gergely Novák
>Priority: Critical
> Attachments: YARN-4978.001.patch
>
>
> We are generating a lot of javadoc warnings with JDK 1.8. Right now the 
> number is limited to 100. Enlarging this limit can probably reveal more 
> problems in one batch during our javadoc generation process. 






[jira] [Commented] (HADOOP-13083) The number of javadocs warnings is limited to 100

2016-05-12 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15282029#comment-15282029
 ] 

Li Lu commented on HADOOP-13083:


Thanks [~andrew.wang]. I'll wait 24 hrs and then commit this fix. 

> The number of javadocs warnings is limited to 100 
> --
>
> Key: HADOOP-13083
> URL: https://issues.apache.org/jira/browse/HADOOP-13083
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Li Lu
>Assignee: Gergely Novák
>Priority: Critical
> Attachments: YARN-4978.001.patch
>
>
> We are generating a lot of javadoc warnings with JDK 1.8. Right now the 
> number is limited to 100. Enlarging this limit can probably reveal more 
> problems in one batch during our javadoc generation process. 






[jira] [Updated] (HADOOP-13083) The number of javadocs warnings is limited to 100

2016-05-12 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-13083:
---
Priority: Critical  (was: Major)

> The number of javadocs warnings is limited to 100 
> --
>
> Key: HADOOP-13083
> URL: https://issues.apache.org/jira/browse/HADOOP-13083
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Gergely Novák
>Priority: Critical
> Attachments: YARN-4978.001.patch
>
>
> We are generating a lot of javadoc warnings with JDK 1.8. Right now the 
> number is limited to 100. Enlarging this limit can probably reveal more 
> problems in one batch during our javadoc generation process. 






[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-05-09 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15277399#comment-15277399
 ] 

Li Lu commented on HADOOP-12563:


Thanks [~mattpaduano]! 

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, HADOOP-12563.13.patch, 
> HADOOP-12563.14.patch, HADOOP-12563.15.patch, HADOOP-12563.16.patch, 
> dtutil-test-out, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.






[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-05-09 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15277386#comment-15277386
 ] 

Li Lu commented on HADOOP-12563:


Thanks [~mattpaduano]! I'm wondering if we want a configuration for the default 
token file version, so that we can still run apps like Tez while they gradually 
adopt the new format? 
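To make the idea concrete, here is a hedged sketch of what a configuration-gated writer could look like. The constants mirror the 'HDTS' magic and version byte of Credentials token files as I understand them; the class and method names are hypothetical, not Hadoop's actual Credentials API:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical sketch: choose the token-file wire version from
// configuration so that older readers (e.g. a Tez AM built against a
// 2.x client) keep working while the new format is rolled out.
public class TokenFileWriterSketch {
    static final byte[] MAGIC = {'H', 'D', 'T', 'S'}; // assumed token-file magic
    static final int LEGACY_VERSION = 0;   // assumed: old Writable-based format
    static final int PROTOBUF_VERSION = 1; // assumed: new protobuf-based format

    public static byte[] frame(byte[] payload, int version) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.write(MAGIC);
        out.writeByte(version); // readers reject version bytes they do not know
        out.write(payload);
        out.flush();
        return bytes.toByteArray();
    }
}
```

A cluster that still serves 2.x clients would keep the configured default at the legacy version until the new readers are everywhere.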

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, HADOOP-12563.13.patch, 
> HADOOP-12563.14.patch, HADOOP-12563.15.patch, HADOOP-12563.16.patch, 
> dtutil-test-out, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.






[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-05-09 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15277375#comment-15277375
 ] 

Li Lu commented on HADOOP-12563:


bq. future code can read legacy tokens, but legacy code can not read future 
tokens.
Sure, this is fine for trunk. At the same time, it means older clients (YARN 2 
clients) may have problems talking to YARN 3 servers; that's why I'm adding an 
incompatible flag here. 

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, HADOOP-12563.13.patch, 
> HADOOP-12563.14.patch, HADOOP-12563.15.patch, HADOOP-12563.16.patch, 
> dtutil-test-out, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.






[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-05-09 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15277361#comment-15277361
 ] 

Li Lu commented on HADOOP-12563:


bq. that looks like a bona fide IO error to me...
Sorry, I'm confused; could you please elaborate on this? 

The exception was generated from readTokenStorageStream, where we check the 
version number of the token storage. I'm not sure why 1 is an unknown version 
in the version check...

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, HADOOP-12563.13.patch, 
> HADOOP-12563.14.patch, HADOOP-12563.15.patch, HADOOP-12563.16.patch, 
> dtutil-test-out, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.






[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-05-09 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15277321#comment-15277321
 ] 

Li Lu commented on HADOOP-12563:


I noticed the following errors when running Tez on our trunk code:
{code}
2016-05-09 16:12:18,733 [ERROR] [main] |app.DAGAppMaster|: Error starting 
DAGAppMaster
java.io.IOException: Exception reading 
/tmp/hadoop-llu/nm-local-dir/usercache/llu/appcache/application_1462833671675_0001/container_e03_1462833671675_0001_01_01/container_tokens
at 
org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:197)
at 
org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:789)
at 
org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:748)
at 
org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:621)
at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2345)
Caused by: java.io.IOException: Unknown version 1 in token storage.
at 
org.apache.hadoop.security.Credentials.readTokenStorageStream(Credentials.java:215)
at 
org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:194)
... 4 more
{code}

Both Tez and Hadoop are built from the latest master/trunk.

It looks like we introduced some incompatible changes in this JIRA, so I have 
marked it as an incompatible change for now. 

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, HADOOP-12563.13.patch, 
> HADOOP-12563.14.patch, HADOOP-12563.15.patch, HADOOP-12563.16.patch, 
> dtutil-test-out, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.






[jira] [Updated] (HADOOP-12563) Updated utility to create/modify token files

2016-05-09 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-12563:
---
Hadoop Flags: Incompatible change, Reviewed  (was: Reviewed)

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, HADOOP-12563.13.patch, 
> HADOOP-12563.14.patch, HADOOP-12563.15.patch, HADOOP-12563.16.patch, 
> dtutil-test-out, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.






[jira] [Commented] (HADOOP-13083) The number of javadocs warnings is limited to 100

2016-05-03 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269724#comment-15269724
 ] 

Li Lu commented on HADOOP-13083:


Moved the patch to HADOOP. The fix looks good to me, but I would like to check 
whether this will break anything in Common and/or HDFS. 

> The number of javadocs warnings is limited to 100 
> --
>
> Key: HADOOP-13083
> URL: https://issues.apache.org/jira/browse/HADOOP-13083
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Gergely Novák
> Attachments: YARN-4978.001.patch
>
>
> We are generating a lot of javadoc warnings with JDK 1.8. Right now the 
> number is limited to 100. Enlarging this limit can probably reveal more 
> problems in one batch during our javadoc generation process. 






[jira] [Moved] (HADOOP-13083) The number of javadocs warnings is limited to 100

2016-05-03 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu moved YARN-4978 to HADOOP-13083:
--

Key: HADOOP-13083  (was: YARN-4978)
Project: Hadoop Common  (was: Hadoop YARN)

> The number of javadocs warnings is limited to 100 
> --
>
> Key: HADOOP-13083
> URL: https://issues.apache.org/jira/browse/HADOOP-13083
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Gergely Novák
> Attachments: YARN-4978.001.patch
>
>
> We are generating a lot of javadoc warnings with JDK 1.8. Right now the 
> number is limited to 100. Enlarging this limit can probably reveal more 
> problems in one batch during our javadoc generation process. 






[jira] [Updated] (HADOOP-12906) AuthenticatedURL should convert a 404/Not Found into an FileNotFoundException.

2016-03-10 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-12906:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

I committed this patch into trunk and branch-2. Thanks [~ste...@apache.org] for 
the work and [~liuml07] for the quick review! Given that this patch is small, 
I'm also fine with cherry-picking it to branch-2.8. 

> AuthenticatedURL should convert a 404/Not Found into an 
> FileNotFoundException. 
> ---
>
> Key: HADOOP-12906
> URL: https://issues.apache.org/jira/browse/HADOOP-12906
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-12906-001.patch
>
>
> If you ask for a URL that isn't there, {{AuthenticatedURL}} raises an 
> exception saying you are unauthed. 
> It's not checking the response code; 404 is an error all of its own, which 
> can be uprated to a FileNotFoundException.
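A minimal, stdlib-only sketch of the status-code mapping being requested here (the class and method names are hypothetical; this is not the actual AuthenticatedURL patch):

```java
// Illustrative only: check the HTTP status before treating a failure as an
// authentication problem, and map 404 to FileNotFoundException.
import java.io.FileNotFoundException;
import java.io.IOException;
import java.net.HttpURLConnection;

public class StatusMapping {
    static void checkResponse(int status, String url) throws IOException {
        if (status == HttpURLConnection.HTTP_NOT_FOUND) {        // 404
            throw new FileNotFoundException(url + " not found");
        }
        if (status == HttpURLConnection.HTTP_UNAUTHORIZED        // 401
            || status == HttpURLConnection.HTTP_FORBIDDEN) {     // 403
            throw new IOException("authentication failed for " + url);
        }
        // other statuses fall through to normal handling
    }

    public static void main(String[] args) throws IOException {
        try {
            checkResponse(404, "http://example.org/missing");
        } catch (FileNotFoundException e) {
            System.out.println("mapped 404 -> FileNotFoundException");
        }
    }
}
```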



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12906) AuthenticatedURL should convert a 404/Not Found into an FileNotFoundException.

2016-03-10 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-12906:
---
Summary: AuthenticatedURL should convert a 404/Not Found into an 
FileNotFoundException.   (was: AuthenticatedURL translates a 404/Not Found into 
an AuthenticationException. It isn't)

> AuthenticatedURL should convert a 404/Not Found into an 
> FileNotFoundException. 
> ---
>
> Key: HADOOP-12906
> URL: https://issues.apache.org/jira/browse/HADOOP-12906
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12906-001.patch
>
>
> If you ask for a URL that isn't there, {{AuthenticatedURL}} raises an 
> exception saying you are unauthed. 
> It's not checking the response code; 404 is an error all of its own, which 
> can be uprated to a FileNotFoundException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12906) AuthenticatedURL translates a 404/Not Found into an AuthenticationException. It isn't

2016-03-10 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15189817#comment-15189817
 ] 

Li Lu commented on HADOOP-12906:


Patch LGTM. +1. Will commit shortly. 

> AuthenticatedURL translates a 404/Not Found into an AuthenticationException. 
> It isn't
> -
>
> Key: HADOOP-12906
> URL: https://issues.apache.org/jira/browse/HADOOP-12906
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12906-001.patch
>
>
> If you ask for a URL that isn't there, {{AuthenticatedURL}} raises an 
> exception saying you are unauthed. 
> It's not checking the response code; 404 is an error all of its own, which 
> can be uprated to a FileNotFoundException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12831) LocalFS/FSOutputSummer NPEs in constructor if bytes per checksum set to 0

2016-02-27 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-12831:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I have committed the latest patch into trunk, branch-2 and branch-2.8. Thanks 
[~liuml07] for the contribution. Thanks [~ste...@apache.org] for the review! 

> LocalFS/FSOutputSummer NPEs in constructor if bytes per checksum  set to 0
> --
>
> Key: HADOOP-12831
> URL: https://issues.apache.org/jira/browse/HADOOP-12831
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12831.000.patch, HADOOP-12831.001.patch, 
> HADOOP-12831.002.patch
>
>
> If you set the number of bytes per checksum to zero, 
> {code}
> conf.setInt(LocalFileSystemConfigKeys.LOCAL_FS_BYTES_PER_CHECKSUM_KEY, 0)
> {code}
> then create a "file://" instance, you get to see a stack trace



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12831) LocalFS/FSOutputSummer NPEs in constructor if bytes per checksum set to 0

2016-02-27 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170893#comment-15170893
 ] 

Li Lu commented on HADOOP-12831:


+1, will commit shortly. 

> LocalFS/FSOutputSummer NPEs in constructor if bytes per checksum  set to 0
> --
>
> Key: HADOOP-12831
> URL: https://issues.apache.org/jira/browse/HADOOP-12831
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HADOOP-12831.000.patch, HADOOP-12831.001.patch, 
> HADOOP-12831.002.patch
>
>
> If you set the number of bytes per checksum to zero, 
> {code}
> conf.setInt(LocalFileSystemConfigKeys.LOCAL_FS_BYTES_PER_CHECKSUM_KEY, 0)
> {code}
> then create a "file://" instance, you get to see a stack trace



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12831) LocalFS/FSOutputSummer NPEs in constructor if bytes per checksum set to 0

2016-02-26 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170031#comment-15170031
 ] 

Li Lu commented on HADOOP-12831:


Latest patch LGTM. Will wait for about a day then commit. [~ste...@apache.org] 
I believe you were talking about a fix like this, so would you please double 
check it? Thanks! 

> LocalFS/FSOutputSummer NPEs in constructor if bytes per checksum  set to 0
> --
>
> Key: HADOOP-12831
> URL: https://issues.apache.org/jira/browse/HADOOP-12831
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HADOOP-12831.000.patch, HADOOP-12831.001.patch, 
> HADOOP-12831.002.patch
>
>
> If you set the number of bytes per checksum to zero, 
> {code}
> conf.setInt(LocalFileSystemConfigKeys.LOCAL_FS_BYTES_PER_CHECKSUM_KEY, 0)
> {code}
> then create a "file://" instance, you get to see a stack trace



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12831) LocalFS/FSOutputSummer NPEs in constructor if bytes per checksum set to 0

2016-02-23 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15159985#comment-15159985
 ] 

Li Lu commented on HADOOP-12831:


Thanks. Sorry, I missed one point just now: DataChecksum appears to be a little 
bit too general a place for this fix?

Also, [~ste...@apache.org], would you mind double-checking that this is the 
proper fix? Thanks! 

> LocalFS/FSOutputSummer NPEs in constructor if bytes per checksum  set to 0
> --
>
> Key: HADOOP-12831
> URL: https://issues.apache.org/jira/browse/HADOOP-12831
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HADOOP-12831.000.patch, HADOOP-12831.001.patch
>
>
> If you set the number of bytes per checksum to zero, 
> {code}
> conf.setInt(LocalFileSystemConfigKeys.LOCAL_FS_BYTES_PER_CHECKSUM_KEY, 0)
> {code}
> then create a "file://" instance, you get to see a stack trace



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12831) LocalFS/FSOutputSummer NPEs in constructor if bytes per checksum set to 0

2016-02-23 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15159691#comment-15159691
 ] 

Li Lu commented on HADOOP-12831:


Thanks for the quick patch [~liuml07]! Yes, a precondition may make sense. 

One nit:
{code}
bytes per checksum too small
{code}
Maybe we can explicitly tell the user the valid range of the value? 
"Too small" is very vague. 


> LocalFS/FSOutputSummer NPEs in constructor if bytes per checksum  set to 0
> --
>
> Key: HADOOP-12831
> URL: https://issues.apache.org/jira/browse/HADOOP-12831
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HADOOP-12831.000.patch
>
>
> If you set the number of bytes per checksum to zero, 
> {code}
> conf.setInt(LocalFileSystemConfigKeys.LOCAL_FS_BYTES_PER_CHECKSUM_KEY, 0)
> {code}
> then create a "file://" instance, you get to see a stack trace



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8779) Use tokens regardless of authentication type

2016-02-01 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15127374#comment-15127374
 ] 

Li Lu commented on HADOOP-8779:
---

Agreed. IIUC tokens generally represent authorizations. We may want to further 
separate authentication (who am I) and authorization (what can I do) in Hadoop. 
Having tokens as a fundamental concept for all authentication types may help to 
support HDFS multi-tenancy and integration with YARN. 

> Use tokens regardless of authentication type
> 
>
> Key: HADOOP-8779
> URL: https://issues.apache.org/jira/browse/HADOOP-8779
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, security
>Affects Versions: 2.0.2-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>
> Security is a combination of authentication and authorization (tokens).  
> Authorization may be granted independently of the authentication model.  
> Tokens should be used regardless of simple or kerberos authentication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12457) [JDK8] Fix compilation of common by javadoc

2015-10-26 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975519#comment-14975519
 ] 

Li Lu commented on HADOOP-12457:


Thanks for the pointer [~ozawa]! I do remember this problem when working with 
jdiff. I can provide more detail in the new issue if you need it. 

> [JDK8] Fix compilation of common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
> Attachments: HADOOP-12457.00.patch, HADOOP-12457.01.patch, 
> HADOOP-12457.02.patch
>
>
> Delete.java and Server.java cannot be compiled with "unmappable character for 
> encoding ASCII". 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11398) RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately

2015-07-14 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627555#comment-14627555
 ] 

Li Lu commented on HADOOP-11398:


bq. the patch makes the policy stateful
I believe, as [~jingzhao] has raised, that making the policy stateful is the 
main problem with the original fix. 

> RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately
> --
>
> Key: HADOOP-11398
> URL: https://issues.apache.org/jira/browse/HADOOP-11398
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-11398-121114.patch, HADOOP-11398.002.patch, 
> HADOOP-11398.003.patch
>
>
> RetryUpToMaximumTimeWithFixedSleep now inherits 
> RetryUpToMaximumCountWithFixedSleep and just acts as a wrapper to decide 
> maxRetries. The current implementation uses (maxTime / sleepTime) as the 
> number of maxRetries. This is fine if the actual time for each retry is 
> significantly less than the sleep time, but it becomes less accurate if each 
> retry takes a comparable amount of time to the sleep time. The problem gets 
> worse when there are underlying retries. 
> We may want to use timers inside RetryUpToMaximumTimeWithFixedSleep to 
> perform accurate timing. 
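To make the quoted inaccuracy concrete, here is a small back-of-the-envelope sketch (illustrative numbers only, not RetryPolicies code): with maxRetries derived as maxTime / sleepTime, the wall-clock time is roughly maxRetries * (attemptTime + sleepTime), which overshoots the budget whenever attempts themselves take non-trivial time.

```java
// Illustrative arithmetic only: shows how the maxTime / sleepTime retry
// count overshoots the intended time budget when each attempt takes
// non-trivial time (e.g. because of underlying retries).
public class RetryBudget {
    static long actualElapsedMs(long maxTimeMs, long sleepMs, long attemptMs) {
        long maxRetries = maxTimeMs / sleepMs;      // current derivation
        return maxRetries * (attemptMs + sleepMs);  // rough wall-clock cost
    }

    public static void main(String[] args) {
        // 10s budget, 1s sleep -> 10 retries. With instantaneous attempts
        // the elapsed time matches the budget; if each attempt takes 1s,
        // the real elapsed time doubles to ~20s.
        System.out.println(actualElapsedMs(10_000, 1_000, 0));     // 10000
        System.out.println(actualElapsedMs(10_000, 1_000, 1_000)); // 20000
    }
}
```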



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11398) RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately

2015-07-14 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627255#comment-14627255
 ] 

Li Lu commented on HADOOP-11398:


BTW, [~ajisakaa], since I've already given some thought on the original patch 
(and the whole issue), I think a little bit more discussion would definitely 
save a lot of repeated work here. If there's an appropriate fix for this issue 
I can certainly make the change and finish it. 

> RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately
> --
>
> Key: HADOOP-11398
> URL: https://issues.apache.org/jira/browse/HADOOP-11398
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-11398-121114.patch, HADOOP-11398.002.patch, 
> HADOOP-11398.003.patch
>
>
> RetryUpToMaximumTimeWithFixedSleep now inherits 
> RetryUpToMaximumCountWithFixedSleep and just acts as a wrapper to decide 
> maxRetries. The current implementation uses (maxTime / sleepTime) as the 
> number of maxRetries. This is fine if the actual time for each retry is 
> significantly less than the sleep time, but it becomes less accurate if each 
> retry takes a comparable amount of time to the sleep time. The problem gets 
> worse when there are underlying retries. 
> We may want to use timers inside RetryUpToMaximumTimeWithFixedSleep to 
> perform accurate timing. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11398) RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately

2015-07-14 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627220#comment-14627220
 ] 

Li Lu commented on HADOOP-11398:


Hi [~ajisakaa], I think your 003 patch changes retry policy from stateless to 
stateful again. Am I missing anything here? 

> RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately
> --
>
> Key: HADOOP-11398
> URL: https://issues.apache.org/jira/browse/HADOOP-11398
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-11398-121114.patch, HADOOP-11398.002.patch, 
> HADOOP-11398.003.patch
>
>
> RetryUpToMaximumTimeWithFixedSleep now inherits 
> RetryUpToMaximumCountWithFixedSleep and just acts as a wrapper to decide 
> maxRetries. The current implementation uses (maxTime / sleepTime) as the 
> number of maxRetries. This is fine if the actual time for each retry is 
> significantly less than the sleep time, but it becomes less accurate if each 
> retry takes a comparable amount of time to the sleep time. The problem gets 
> worse when there are underlying retries. 
> We may want to use timers inside RetryUpToMaximumTimeWithFixedSleep to 
> perform accurate timing. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11398) RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately

2015-07-14 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627032#comment-14627032
 ] 

Li Lu commented on HADOOP-11398:


bq. We cannot assume t1=t2 because t2 becomes larger than t1 when there are 
underlying retries.
If this is the case, then only allowing retries until t1+delta_t is wrong, because 
we may accidentally disable retries entirely when t1 and t2 are 
significantly different. What we want is to "retry for time t after an error 
occurred", not to "retry for time t after the retry object is initialized". 

> RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately
> --
>
> Key: HADOOP-11398
> URL: https://issues.apache.org/jira/browse/HADOOP-11398
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-11398-121114.patch, HADOOP-11398.002.patch
>
>
> RetryUpToMaximumTimeWithFixedSleep now inherits 
> RetryUpToMaximumCountWithFixedSleep and just acts as a wrapper to decide 
> maxRetries. The current implementation uses (maxTime / sleepTime) as the 
> number of maxRetries. This is fine if the actual time for each retry is 
> significantly less than the sleep time, but it becomes less accurate if each 
> retry takes a comparable amount of time to the sleep time. The problem gets 
> worse when there are underlying retries. 
> We may want to use timers inside RetryUpToMaximumTimeWithFixedSleep to 
> perform accurate timing. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11398) RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately

2015-07-10 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14622604#comment-14622604
 ] 

Li Lu commented on HADOOP-11398:


Hi [~ajisakaa], there is one issue with your current fix: we need to base the 
timeLimit on the time the first retry happens, not on object creation. 
Say a retry object is created at t1 with maximum time delta_t, and an actual retry 
happens at t2. We want the retry to stop at t2+delta_t, not at t1+delta_t. 
In practice there may be a quite significant gap between t1 and t2, so setting the 
timeLimit to t1+delta_t may not be right. I'm not sure if in all of our use 
cases we can safely assume or enforce t1=t2. 

> RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately
> --
>
> Key: HADOOP-11398
> URL: https://issues.apache.org/jira/browse/HADOOP-11398
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-11398-121114.patch, HADOOP-11398.002.patch
>
>
> RetryUpToMaximumTimeWithFixedSleep now inherits 
> RetryUpToMaximumCountWithFixedSleep and just acts as a wrapper to decide 
> maxRetries. The current implementation uses (maxTime / sleepTime) as the 
> number of maxRetries. This is fine if the actual time for each retry is 
> significantly less than the sleep time, but it becomes less accurate if each 
> retry takes a comparable amount of time to the sleep time. The problem gets 
> worse when there are underlying retries. 
> We may want to use timers inside RetryUpToMaximumTimeWithFixedSleep to 
> perform accurate timing. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12116) Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in branch-2

2015-06-24 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600196#comment-14600196
 ] 

Li Lu commented on HADOOP-12116:


Hi [~aw], I agree with you that in this use case, mixing the new script with the old 
home dir may be problematic. We're fixing that right now. Meanwhile, the 
problem exposed here is still valid: we need to conservatively treat an unset 
$cygwin as false and not trigger the branch. 

> Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in 
> branch-2
> -
>
> Key: HADOOP-12116
> URL: https://issues.apache.org/jira/browse/HADOOP-12116
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-12116-branch-2.001.patch
>
>
> We're using syntax like "if $cygwin; then" which may be erroneously evaluated 
> as true if $cygwin is unset. We need to fix this in branch-2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12116) Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in branch-2

2015-06-24 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600059#comment-14600059
 ] 

Li Lu commented on HADOOP-12116:


Let me elaborate a little more on this issue. We discovered this problem 
when testing rolling upgrades. When running a Hadoop streaming job that runs 
"hdfs dfs -stat", if HADOOP_HOME is set to the old version of Hadoop but the 
current hdfs script is from the new version, we lose the entire CLASSPATH and 
cannot find any classes. The reason is that the old scripts did not set $cygwin, 
so $cygwin remains unset when the new script evaluates "if $cygwin; 
then". As a result, we set cygwin classpaths in all cases, which sets the 
classpath to empty (since we do not actually have cygwin). 
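The gotcha can be reproduced in a few lines of shell (a standalone sketch, not the actual hadoop/hdfs/yarn scripts): an unset or empty variable expanded as a command becomes the null command, which exits 0, so the branch is taken.

```shell
# Standalone illustration of the bug and the defensive fix; the variable
# name mirrors the scripts' $cygwin flag, everything else is illustrative.
cygwin=""   # simulate an old script that never set the flag

# Unsafe: "$cygwin" expands to nothing, the null command exits 0,
# so the branch runs even though we are not on cygwin.
if $cygwin; then
  echo "unsafe branch taken"
fi

# Safe: require an explicit literal match, so unset/empty means false.
if [ "$cygwin" = true ]; then
  echo "safe branch taken"
else
  echo "safe branch skipped"
fi
```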

Given this fact, [~vinodkv], do you agree that we need to include this fix in 
2.7.1? Thanks! 

> Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in 
> branch-2
> -
>
> Key: HADOOP-12116
> URL: https://issues.apache.org/jira/browse/HADOOP-12116
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-12116-branch-2.001.patch
>
>
> We're using syntax like "if $cygwin; then" which may be erroneously evaluated 
> as true if $cygwin is unset. We need to fix this in branch-2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12116) Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in branch-2

2015-06-24 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-12116:
---
Attachment: HADOOP-12116-branch-2.001.patch

In this patch I require $cygwin to be explicitly true before triggering 
cygwin-related env settings. This way, if $cygwin is unset we will not enter 
those branches. 

> Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in 
> branch-2
> -
>
> Key: HADOOP-12116
> URL: https://issues.apache.org/jira/browse/HADOOP-12116
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-12116-branch-2.001.patch
>
>
> We're using syntax like "if $cygwin; then" which may be erroneously evaluated 
> as true if $cygwin is unset. We need to fix this in branch-2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12116) Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in branch-2

2015-06-24 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-12116:
---
Status: Patch Available  (was: Open)

> Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in 
> branch-2
> -
>
> Key: HADOOP-12116
> URL: https://issues.apache.org/jira/browse/HADOOP-12116
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-12116-branch-2.001.patch
>
>
> We're using syntax like "if $cygwin; then" which may be erroneously evaluated 
> as true if $cygwin is unset. We need to fix this in branch-2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12116) Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in branch-2

2015-06-24 Thread Li Lu (JIRA)
Li Lu created HADOOP-12116:
--

 Summary: Fix unrecommended syntax usages in hadoop/hdfs/yarn 
script for cygwin in branch-2
 Key: HADOOP-12116
 URL: https://issues.apache.org/jira/browse/HADOOP-12116
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Li Lu
Assignee: Li Lu


We're using syntax like "if $cygwin; then" which may be erroneously evaluated 
as true if $cygwin is unset. We need to fix this in branch-2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11814) Reformat hadoop-annotations, o.a.h.classification.tools

2015-04-08 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11814:
---
Status: Patch Available  (was: Open)

> Reformat hadoop-annotations, o.a.h.classification.tools
> ---
>
> Key: HADOOP-11814
> URL: https://issues.apache.org/jira/browse/HADOOP-11814
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: formatting
> Attachments: HADOOP-11814-040815.patch
>
>
> RootDocProcessor has some indentation problems. It mixes tabs and spaces for 
> indentation. We may want to fix this. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11814) Reformat hadoop-annotations, o.a.h.classification.tools

2015-04-08 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11814:
---
Attachment: HADOOP-11814-040815.patch

Uploaded a patch to reformat all classes in o.a.h.classification.tools, replacing 
all tabs with spaces. 

> Reformat hadoop-annotations, o.a.h.classification.tools
> ---
>
> Key: HADOOP-11814
> URL: https://issues.apache.org/jira/browse/HADOOP-11814
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: formatting
> Attachments: HADOOP-11814-040815.patch
>
>
> RootDocProcessor has some indentation problems. It mixes tabs and spaces for 
> indentation. We may want to fix this. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11814) Reformat hadoop-annotations, o.a.h.classification.tools

2015-04-08 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11814:
---
Summary: Reformat hadoop-annotations, o.a.h.classification.tools  (was: 
Reformat hadoop-annotations/RootDocProcessor)

> Reformat hadoop-annotations, o.a.h.classification.tools
> ---
>
> Key: HADOOP-11814
> URL: https://issues.apache.org/jira/browse/HADOOP-11814
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: formatting
>
> RootDocProcessor has some indentation problems. It mixes tabs and spaces for 
> indentation. We may want to fix this. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11814) Reformat hadoop-annotations/RootDocProcessor

2015-04-08 Thread Li Lu (JIRA)
Li Lu created HADOOP-11814:
--

 Summary: Reformat hadoop-annotations/RootDocProcessor
 Key: HADOOP-11814
 URL: https://issues.apache.org/jira/browse/HADOOP-11814
 Project: Hadoop Common
  Issue Type: Task
Reporter: Li Lu
Assignee: Li Lu
Priority: Minor


RootDocProcessor has some indentation problems. It mixes tabs and spaces for 
indentation. We may want to fix this. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11776) jdiff is broken in Hadoop 2

2015-04-06 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14481438#comment-14481438
 ] 

Li Lu commented on HADOOP-11776:


Thanks [~vinodkv] for the review and commit. Yes, we do have more work to do 
on tooling to check API compatibility. Given the current status, our long-term 
goal may be to replace jdiff with some other tool. 

> jdiff is broken in Hadoop 2
> ---
>
> Key: HADOOP-11776
> URL: https://issues.apache.org/jira/browse/HADOOP-11776
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: HADOOP-11776-040115.patch
>
>
> Seems like we haven't touched the API files from jdiff under dev-support for a 
> while. For now we're missing the jdiff API files for hadoop 2. We're also 
> missing YARN when generating the jdiff API files. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11776) jdiff is broken in Hadoop 2

2015-04-02 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393077#comment-14393077
 ] 

Li Lu commented on HADOOP-11776:


Tested the 3 failed tests locally and could reproduce none of them. The failed UTs 
appear to be unrelated to the changes here. 

> jdiff is broken in Hadoop 2
> ---
>
> Key: HADOOP-11776
> URL: https://issues.apache.org/jira/browse/HADOOP-11776
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Blocker
> Attachments: HADOOP-11776-040115.patch
>
>
> Seems like we haven't touched the API files from jdiff under dev-support for a 
> while. For now we're missing the jdiff API files for hadoop 2. We're also 
> missing YARN when generating the jdiff API files. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11776) jdiff is broken in Hadoop 2

2015-04-01 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11776:
---
Attachment: HADOOP-11776-040115.patch

Worked on this issue on top of [~ozawa]'s HADOOP-11377 patch. That patch 
addressed the Null.java problem for me. In this patch I'm doing the following 
things:
# Applied a quick fix for the jdiff compatibility param settings to eliminate 
the package "" error. I'd appreciate any background information about 
this setting, so that we can decide on the correct long-term fix. 
# Ran the maven script against branch-2.6.0 to generate two sample API files, 
one for hadoop-common-project/hadoop-common and one for 
hadoop-hdfs-project/hadoop-hdfs
# Fixed the SAXParser-not-found exception when running jdiff. 

After this fix, Hadoop HDFS can produce an API diff report after {{mvn package -Pdocs 
-DskipTests}} in its target/site/jdiff/xml folder. Hadoop common still has a 
problem with jdiff, which is complaining about the following:

  [javadoc] Error: duplicate comment id: 
org.apache.hadoop.metrics2.MetricsSystem.register_changed(java.lang.String, 
java.lang.String, T)

Since this is the only error we got across all components (we're also exploring 
YARN-3426 for yarn-api, yarn-client, yarn-common, and yarn-server-common), 
after briefly checking the original jdiff code I strongly suspect the error is 
triggered by a bug in jdiff. However, I'm not sure jdiff is still maintained, 
so as a long-term fix we may want to look for alternative tools. 

So far the script generates a diff report for hdfs. I'm extending the YARN 
part in YARN-3426. 

The patch in HADOOP-11377 appears to work on my local machine. 

> jdiff is broken in Hadoop 2
> ---
>
> Key: HADOOP-11776
> URL: https://issues.apache.org/jira/browse/HADOOP-11776
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Blocker
> Attachments: HADOOP-11776-040115.patch
>
>
> Seems like we haven't touched the API files from jdiff under dev-support for a 
> while. We're currently missing the jdiff API files for Hadoop 2. We're also 
> missing YARN when generating the jdiff API files. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11776) jdiff is broken in Hadoop 2

2015-04-01 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11776:
---
Status: Patch Available  (was: Open)

> jdiff is broken in Hadoop 2
> ---
>
> Key: HADOOP-11776
> URL: https://issues.apache.org/jira/browse/HADOOP-11776
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Blocker
> Attachments: HADOOP-11776-040115.patch
>
>
> Seems like we haven't touched the API files from jdiff under dev-support for a 
> while. We're currently missing the jdiff API files for Hadoop 2. We're also 
> missing YARN when generating the jdiff API files. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11377) jdiff failing on java 7 and java 8, "Null.java" not found

2015-03-31 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14389064#comment-14389064
 ] 

Li Lu commented on HADOOP-11377:


+1, patch LGTM and fixed the error on my local machine. 

> jdiff failing on java 7 and java 8, "Null.java" not found
> -
>
> Key: HADOOP-11377
> URL: https://issues.apache.org/jira/browse/HADOOP-11377
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.6.0, 2.7.0
> Environment: Java8 jenkins
>Reporter: Steve Loughran
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-11377.001.patch
>
>
> Jdiff is having problems on Java 8, as it cannot find a javadoc for the new 
> {{Null}} datatype
> {code}
> '
> The ' characters around the executable and arguments are
> not part of the command.
>   [javadoc] javadoc: error - Illegal package name: ""
>   [javadoc] javadoc: error - File not found: 
> "
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11377) jdiff failing on java 7 and java 8, "Null.java" not found

2015-03-30 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14388078#comment-14388078
 ] 

Li Lu commented on HADOOP-11377:


Hi [~ozawa], I'll fix the whole package in HADOOP-11776, but my change depends 
on your patch in this JIRA. Would you like to go ahead with the current one so 
that I can work on the next step? Thanks! 

> jdiff failing on java 7 and java 8, "Null.java" not found
> -
>
> Key: HADOOP-11377
> URL: https://issues.apache.org/jira/browse/HADOOP-11377
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.6.0, 2.7.0
> Environment: Java8 jenkins
>Reporter: Steve Loughran
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-11377.001.patch
>
>
> Jdiff is having problems on Java 8, as it cannot find a javadoc for the new 
> {{Null}} datatype
> {code}
> '
> The ' characters around the executable and arguments are
> not part of the command.
>   [javadoc] javadoc: error - Illegal package name: ""
>   [javadoc] javadoc: error - File not found: 
> "
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11377) jdiff failing on java 7 and java 8, "Null.java" not found

2015-03-30 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14388031#comment-14388031
 ] 

Li Lu commented on HADOOP-11377:


I did some investigation and I think I now know the direct cause of the 
{{javadoc: error - Illegal package name: ""}}. It looks like jdiff.compatibility 
is set to empty at the very beginning of hadoop-project-dist/pom.xml, which may 
cause ant to pass that empty value as "". Javadoc may interpret that as a 
package name. After I commented that line out, the error disappeared. 

> jdiff failing on java 7 and java 8, "Null.java" not found
> -
>
> Key: HADOOP-11377
> URL: https://issues.apache.org/jira/browse/HADOOP-11377
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.6.0, 2.7.0
> Environment: Java8 jenkins
>Reporter: Steve Loughran
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-11377.001.patch
>
>
> Jdiff is having problems on Java 8, as it cannot find a javadoc for the new 
> {{Null}} datatype
> {code}
> '
> The ' characters around the executable and arguments are
> not part of the command.
>   [javadoc] javadoc: error - Illegal package name: ""
>   [javadoc] javadoc: error - File not found: 
> "
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11377) jdiff failing on java 7 and java 8, "Null.java" not found

2015-03-30 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14387660#comment-14387660
 ] 

Li Lu commented on HADOOP-11377:


Hi [~ozawa]! Seems like the illegal package name problem also blocks us from 
getting jdiff results. Any hints/background on how to resolve that? Thanks! 

> jdiff failing on java 7 and java 8, "Null.java" not found
> -
>
> Key: HADOOP-11377
> URL: https://issues.apache.org/jira/browse/HADOOP-11377
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.6.0, 2.7.0
> Environment: Java8 jenkins
>Reporter: Steve Loughran
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-11377.001.patch
>
>
> Jdiff is having problems on Java 8, as it cannot find a javadoc for the new 
> {{Null}} datatype
> {code}
> '
> The ' characters around the executable and arguments are
> not part of the command.
>   [javadoc] javadoc: error - Illegal package name: ""
>   [javadoc] javadoc: error - File not found: 
> "
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11776) jdiff is broken in Hadoop 2

2015-03-30 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14387571#comment-14387571
 ] 

Li Lu commented on HADOOP-11776:


Hi [~busbey], thanks for the pointer! Will definitely look into that. 

> jdiff is broken in Hadoop 2
> ---
>
> Key: HADOOP-11776
> URL: https://issues.apache.org/jira/browse/HADOOP-11776
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Blocker
>
> Seems like we haven't touched the API files from jdiff under dev-support for a 
> while. We're currently missing the jdiff API files for Hadoop 2. We're also 
> missing YARN when generating the jdiff API files. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11776) jdiff is broken in Hadoop 2

2015-03-30 Thread Li Lu (JIRA)
Li Lu created HADOOP-11776:
--

 Summary: jdiff is broken in Hadoop 2
 Key: HADOOP-11776
 URL: https://issues.apache.org/jira/browse/HADOOP-11776
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Li Lu
Assignee: Li Lu
Priority: Blocker


Seems like we haven't touched the API files from jdiff under dev-support for a 
while. We're currently missing the jdiff API files for Hadoop 2. We're also 
missing YARN when generating the jdiff API files. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-27 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11761:
---
Attachment: HADOOP-11761-032715.patch

Thanks [~wheat9] for the review comments! I rechecked our solution for the (now 
test-only) StringSignerSecretProvider. Since we're exempting 
StringSignerSecretProvider from findbugs, I'm doing the same thing with 
FileSignerSecretProvider. 

> Fix findbugs warnings in org.apache.hadoop.security.authentication
> --
>
> Key: HADOOP-11761
> URL: https://issues.apache.org/jira/browse/HADOOP-11761
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: findbugs
> Attachments: HADOOP-11761-032615.patch, HADOOP-11761-032715.patch
>
>
> As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
> org.apache.hadoop.security.authentication. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-26 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11761:
---
Labels: findbugs  (was: )

> Fix findbugs warnings in org.apache.hadoop.security.authentication
> --
>
> Key: HADOOP-11761
> URL: https://issues.apache.org/jira/browse/HADOOP-11761
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: findbugs
> Attachments: HADOOP-11761-032615.patch
>
>
> As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
> org.apache.hadoop.security.authentication. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-26 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11761:
---
Attachment: HADOOP-11761-032615.patch

This issue looks really weird, as I cleaned up all findbugs warnings in 
HADOOP-11379. After looking into it, it seems the fix in HADOOP-10670 
introduced the warning at the current location. In the findbugs log of 
HADOOP-10670, I noticed the following lines:
{code}
==
==
Determining number of patched Findbugs warnings.
==
==


  Running findbugs in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core
/home/jenkins/tools/maven/latest/bin/mvn clean test findbugs:findbugs 
-DskipTests -DHadoopPatchProcess < /dev/null > 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/../patchprocess/patchFindBugsOutputhadoop-mapreduce-client-core.txt
 2>&1
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build
  Running findbugs in hadoop-tools/hadoop-archives
/home/jenkins/tools/maven/latest/bin/mvn clean test findbugs:findbugs 
-DskipTests -DHadoopPatchProcess < /dev/null > 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/../patchprocess/patchFindBugsOutputhadoop-archives.txt
 2>&1
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build
Found 0 Findbugs warnings 
(/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-tools/hadoop-archives/target/findbugsXml.xml)
Found 0 Findbugs warnings 
(/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/target/findbugsXml.xml)
{code}
So apparently Jenkins ran findbugs against the wrong place on HADOOP-10670. I 
reran findbugs locally against hadoop-auth, and after this quick fix the 
warning is gone. 

> Fix findbugs warnings in org.apache.hadoop.security.authentication
> --
>
> Key: HADOOP-11761
> URL: https://issues.apache.org/jira/browse/HADOOP-11761
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: findbugs
> Attachments: HADOOP-11761-032615.patch
>
>
> As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
> org.apache.hadoop.security.authentication. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-26 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11761:
---
Status: Patch Available  (was: Open)

> Fix findbugs warnings in org.apache.hadoop.security.authentication
> --
>
> Key: HADOOP-11761
> URL: https://issues.apache.org/jira/browse/HADOOP-11761
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
> Attachments: HADOOP-11761-032615.patch
>
>
> As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
> org.apache.hadoop.security.authentication. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-26 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11761:
---
Priority: Minor  (was: Major)

> Fix findbugs warnings in org.apache.hadoop.security.authentication
> --
>
> Key: HADOOP-11761
> URL: https://issues.apache.org/jira/browse/HADOOP-11761
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>
> As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
> org.apache.hadoop.security.authentication. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-26 Thread Li Lu (JIRA)
Li Lu created HADOOP-11761:
--

 Summary: Fix findbugs warnings in 
org.apache.hadoop.security.authentication
 Key: HADOOP-11761
 URL: https://issues.apache.org/jira/browse/HADOOP-11761
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Li Lu
Assignee: Li Lu


As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
org.apache.hadoop.security.authentication. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-26 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382953#comment-14382953
 ] 

Li Lu commented on HADOOP-11748:


Thanks [~wheat9] for continuing work on this. The fix to TestAuthenticationFilter 
looks good to me. 

> Secrets for auth cookies can be specified in clear text
> ---
>
> Key: HADOOP-11748
> URL: https://issues.apache.org/jira/browse/HADOOP-11748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>Priority: Critical
> Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch
>
>
> Based on the discussion on HADOOP-10670, this jira proposes to remove 
> {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
> and security vulnerabilities.
> {quote}
> My understanding is that the use case of inlining the secret is never 
> supported. The property is used to pass the secret internally. The way it 
> works before HADOOP-10868 is the following:
> * Users specify the initializer of the authentication filter in the 
> configuration.
> * AuthenticationFilterInitializer reads the secret file. The server will not 
> start if the secret file does not exist. The initializer will set the 
> property if it reads the file correctly.
> * There is no way to specify the secret in the configuration out-of-the-box – 
> the secret is always overwritten by AuthenticationFilterInitializer.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-26 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11748:
---
Attachment: HADOOP-11748-032615-poc.patch

Did some work to change the {{StringSecretProvider}} class to be test-only. 
Most of the work is done, but TestAuthenticationFilter is failing because we're 
changing the default filters. In a comprehensive fix we need to change the 
mockito settings in TestAuthenticationFilter to create 
{{StringSecretProvider}}s in {{config}} objects. 

> Secrets for auth cookies can be specified in clear text
> ---
>
> Key: HADOOP-11748
> URL: https://issues.apache.org/jira/browse/HADOOP-11748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>Priority: Critical
> Attachments: HADOOP-11748-032615-poc.patch
>
>
> Based on the discussion on HADOOP-10670, this jira proposes to remove 
> {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
> and security vulnerabilities.
> {quote}
> My understanding is that the use case of inlining the secret is never 
> supported. The property is used to pass the secret internally. The way it 
> works before HADOOP-10868 is the following:
> * Users specify the initializer of the authentication filter in the 
> configuration.
> * AuthenticationFilterInitializer reads the secret file. The server will not 
> start if the secret file does not exist. The initializer will set the 
> property if it reads the file correctly.
> * There is no way to specify the secret in the configuration out-of-the-box – 
> the secret is always overwritten by AuthenticationFilterInitializer.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-25 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380930#comment-14380930
 ] 

Li Lu commented on HADOOP-11748:


{{StringSecretProvider}} is heavily used in our existing unit tests, in the 
{{o.a.h.security.authentication}} packages and {{o.a.h.fs.http.server}}. Maybe 
we want to keep it as {{VisibleForTesting}} and disable all 
configuration-related initializations? Going one step further, if we can 
eliminate its usage in {{TestHttpFSServer}}, we can further reduce the 
visibility of this class. Please feel free to let me know your preference. 

> Secrets for auth cookies can be specified in clear text
> ---
>
> Key: HADOOP-11748
> URL: https://issues.apache.org/jira/browse/HADOOP-11748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>Priority: Critical
>
> Based on the discussion on HADOOP-10670, this jira proposes to remove 
> {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
> and security vulnerabilities.
> {quote}
> My understanding is that the use case of inlining the secret is never 
> supported. The property is used to pass the secret internally. The way it 
> works before HADOOP-10868 is the following:
> * Users specify the initializer of the authentication filter in the 
> configuration.
> * AuthenticationFilterInitializer reads the secret file. The server will not 
> start if the secret file does not exist. The initializer will set the 
> property if it reads the file correctly.
> * There is no way to specify the secret in the configuration out-of-the-box – 
> the secret is always overwritten by AuthenticationFilterInitializer.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-25 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu reassigned HADOOP-11748:
--

Assignee: Li Lu

> Secrets for auth cookies can be specified in clear text
> ---
>
> Key: HADOOP-11748
> URL: https://issues.apache.org/jira/browse/HADOOP-11748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>Priority: Critical
>
> Based on the discussion on HADOOP-10670, this jira proposes to remove 
> {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
> and security vulnerabilities.
> {quote}
> My understanding is that the use case of inlining the secret is never 
> supported. The property is used to pass the secret internally. The way it 
> works before HADOOP-10868 is the following:
> * Users specify the initializer of the authentication filter in the 
> configuration.
> * AuthenticationFilterInitializer reads the secret file. The server will not 
> start if the secret file does not exist. The initializer will set the 
> property if it reads the file correctly.
> * There is no way to specify the secret in the configuration out-of-the-box – 
> the secret is always overwritten by AuthenticationFilterInitializer.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11728) Try merging USER_FACING_URLS and ALL_URLS

2015-03-18 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11728:
---
Status: Patch Available  (was: Open)

> Try merging USER_FACING_URLS and ALL_URLS
> -
>
> Key: HADOOP-11728
> URL: https://issues.apache.org/jira/browse/HADOOP-11728
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-11728-031815.patch
>
>
> Per discussion in HADOOP-10703 and YARN-3087, we need to investigate if it's 
> possible to merge USER_FACING_URLS and ALL_URLS in HttpServer2. This would 
> streamline the process to create filters. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11728) Try merging USER_FACING_URLS and ALL_URLS

2015-03-18 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11728:
---
Attachment: HADOOP-11728-031815.patch

In this patch I merged USER_FACING_URLS with ALL_URLS, since we no longer need 
special treatment of jsp files. 

> Try merging USER_FACING_URLS and ALL_URLS
> -
>
> Key: HADOOP-11728
> URL: https://issues.apache.org/jira/browse/HADOOP-11728
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-11728-031815.patch
>
>
> Per discussion in HADOOP-10703 and YARN-3087, we need to investigate if it's 
> possible to merge USER_FACING_URLS and ALL_URLS in HttpServer2. This would 
> streamline the process to create filters. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11728) Try merging USER_FACING_URLS and ALL_URLS

2015-03-18 Thread Li Lu (JIRA)
Li Lu created HADOOP-11728:
--

 Summary: Try merging USER_FACING_URLS and ALL_URLS
 Key: HADOOP-11728
 URL: https://issues.apache.org/jira/browse/HADOOP-11728
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Li Lu
Assignee: Li Lu


Per discussion in HADOOP-10703 and YARN-3087, we need to investigate if it's 
possible to merge USER_FACING_URLS and ALL_URLS in HttpServer2. This would 
streamline the process to create filters. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10478) Fix new findbugs warnings in hadoop-maven-plugins

2015-02-23 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-10478:
---
Attachment: HADOOP-10478-022315.patch

I looked into this and fixed the encoding problem that triggers the findbugs 
warning. 
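
The warning quoted below is the standard "reliance on default encoding" 
pattern: constructing an InputStreamReader without a charset makes behavior 
depend on the platform default. The actual patch isn't reproduced in this 
thread; the sketch below only illustrates the usual fix for this warning class 
(the class and method names here are hypothetical, not from the patch):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

class EncodingFix {
  // Passing an explicit charset removes the findbugs
  // "reliance on default encoding" warning on InputStreamReader.
  static BufferedReader readerFor(InputStream in) {
    return new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));
  }

  // Small helper used for demonstration: read the first line of a stream.
  static String firstLine(InputStream in) {
    try {
      return readerFor(in).readLine();
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
```

The same idea applies to OutputStreamWriter, String.getBytes(), and the other 
constructors findbugs flags under this rule.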

> Fix new findbugs warnings in hadoop-maven-plugins
> -
>
> Key: HADOOP-10478
> URL: https://issues.apache.org/jira/browse/HADOOP-10478
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Li Lu
>  Labels: newbie
> Attachments: HADOOP-10478-022315.patch
>
>
> The following findbug warning needs to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ 
> hadoop-maven-plugins ---
> [INFO] BugInstance size is 1
> [INFO] Error size is 0
> [INFO] Total bugs: 1
> [INFO] Found reliance on default encoding in new 
> org.apache.hadoop.maven.plugin.util.Exec$OutputBufferThread(InputStream): new 
> java.io.InputStreamReader(InputStream) 
> ["org.apache.hadoop.maven.plugin.util.Exec$OutputBufferThread"] At 
> Exec.java:[lines 89-114]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10478) Fix new findbugs warnings in hadoop-maven-plugins

2015-02-23 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-10478:
---
Status: Patch Available  (was: Open)

> Fix new findbugs warnings in hadoop-maven-plugins
> -
>
> Key: HADOOP-10478
> URL: https://issues.apache.org/jira/browse/HADOOP-10478
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Li Lu
>  Labels: newbie
> Attachments: HADOOP-10478-022315.patch
>
>
> The following findbug warning needs to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ 
> hadoop-maven-plugins ---
> [INFO] BugInstance size is 1
> [INFO] Error size is 0
> [INFO] Total bugs: 1
> [INFO] Found reliance on default encoding in new 
> org.apache.hadoop.maven.plugin.util.Exec$OutputBufferThread(InputStream): new 
> java.io.InputStreamReader(InputStream) 
> ["org.apache.hadoop.maven.plugin.util.Exec$OutputBufferThread"] At 
> Exec.java:[lines 89-114]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-10478) Fix new findbugs warnings in hadoop-maven-plugins

2015-02-23 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu reassigned HADOOP-10478:
--

Assignee: Li Lu  (was: Haohui Mai)

> Fix new findbugs warnings in hadoop-maven-plugins
> -
>
> Key: HADOOP-10478
> URL: https://issues.apache.org/jira/browse/HADOOP-10478
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Li Lu
>  Labels: newbie
>
> The following findbug warning needs to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ 
> hadoop-maven-plugins ---
> [INFO] BugInstance size is 1
> [INFO] Error size is 0
> [INFO] Total bugs: 1
> [INFO] Found reliance on default encoding in new 
> org.apache.hadoop.maven.plugin.util.Exec$OutputBufferThread(InputStream): new 
> java.io.InputStreamReader(InputStream) 
> ["org.apache.hadoop.maven.plugin.util.Exec$OutputBufferThread"] At 
> Exec.java:[lines 89-114]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247778#comment-14247778
 ] 

Li Lu commented on HADOOP-11387:


The two unit test failures are both timeout-related. They appear to be 
unrelated to the changes in this JIRA, and I could not reproduce them. From the 
report, the two findbugs warnings are completely orthogonal to this JIRA. I'm 
looking into the performance issues, though. 

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514-1.patch, 
> HADOOP-11387-121514-2.patch, HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.put(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
>   }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11387:
---
Attachment: HADOOP-11387-121514-2.patch

Moved the catch ExecutionException statement into the loader to make the 
overall load logic cleaner. 
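
The patch itself builds on a loading-cache abstraction and isn't reproduced in 
this thread; as a rough illustration of the "handle the failure inside the 
loader" idea, here is a stdlib-only sketch (the HostCanonicalizer name is 
hypothetical, and Hadoop's actual code differs):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class HostCanonicalizer {
  private static final ConcurrentMap<String, String> CACHE =
      new ConcurrentHashMap<>();

  // Resolution failure is handled inside the loader function, so every
  // lookup caches a value and callers never see UnknownHostException.
  // This also removes the check-then-put race of the original code.
  static String canonicalizeHost(String host) {
    return CACHE.computeIfAbsent(host, h -> {
      try {
        return InetAddress.getByName(h).getCanonicalHostName();
      } catch (UnknownHostException e) {
        return h; // fall back to the raw name, as the original code did
      }
    });
  }
}
```

With a Guava-style LoadingCache the structure is the same: the CacheLoader 
catches the lookup failure and returns the fallback, so the call site stays a 
single get.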

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514-1.patch, 
> HADOOP-11387-121514-2.patch, HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.put(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
>   }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11387:
---
Attachment: HADOOP-11387-121514-1.patch

Thanks [~wheat9] for pointing this out. I fixed the overlooked exception 
handling in this patch. 

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514-1.patch, HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.put(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
>   }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247377#comment-14247377
 ] 

Li Lu commented on HADOOP-11387:


Hi [~jeagles], sorry, I'm a little bit confused here: I think this cache is 
only used by HDFS clients, and in that use case there won't be a significant 
number of hostnames in the cache. Am I missing anything here? Thanks! 

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
> private static String canonicalizeHost(String host) {
>   // check if the host has already been canonicalized
>   String fqHost = canonicalizedHostCache.get(host);
>   if (fqHost == null) {
>     try {
>       fqHost = SecurityUtil.getByName(host).getHostName();
>       // slight race condition, but won't hurt
>       canonicalizedHostCache.put(host, fqHost);
>     } catch (UnknownHostException e) {
>       fqHost = host;
>     }
>   }
>   return fqHost;
> }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11387:
---
Status: Patch Available  (was: Open)

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
> private static String canonicalizeHost(String host) {
>   // check if the host has already been canonicalized
>   String fqHost = canonicalizedHostCache.get(host);
>   if (fqHost == null) {
>     try {
>       fqHost = SecurityUtil.getByName(host).getHostName();
>       // slight race condition, but won't hurt
>       canonicalizedHostCache.put(host, fqHost);
>     } catch (UnknownHostException e) {
>       fqHost = host;
>     }
>   }
>   return fqHost;
> }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11387:
---
Attachment: HADOOP-11387-121514.patch

In this patch I replaced the ConcurrentHashMap with a LoadingCache. After this 
change, the logic around canonicalizedHostCache is simplified. 
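For readers without the patch at hand, the compute-on-miss simplification can be sketched with JDK-only types. This is an illustration, not the actual patch: the real change uses Guava's LoadingCache, and Hadoop's SecurityUtil.getByName is stood in for by InetAddress.getByName here.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch only: the actual patch uses Guava's LoadingCache; this shows the
// same idea (resolve on miss, cache the result) with JDK-only types.
// InetAddress.getByName stands in for Hadoop's SecurityUtil.getByName.
public class CanonicalizeSketch {
  private static final ConcurrentMap<String, String> canonicalizedHostCache =
      new ConcurrentHashMap<>();

  static String canonicalizeHost(String host) {
    // computeIfAbsent collapses the get/resolve/put dance into one call,
    // removing the "slight race condition" noted in the original code
    return canonicalizedHostCache.computeIfAbsent(host, h -> {
      try {
        return InetAddress.getByName(h).getHostName();
      } catch (UnknownHostException e) {
        return h;  // fall back to the raw name, as the original code does
      }
    });
  }

  public static void main(String[] args) {
    // the second lookup hits the cache and returns the identical value
    String first = canonicalizeHost("localhost");
    String second = canonicalizeHost("localhost");
    System.out.println(first.equals(second));  // true
  }
}
```

One trade-off worth noting: computeIfAbsent holds the map bin's lock while the resolution runs, whereas the original code tolerated a benign race to avoid blocking concurrent callers.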

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
> private static String canonicalizeHost(String host) {
>   // check if the host has already been canonicalized
>   String fqHost = canonicalizedHostCache.get(host);
>   if (fqHost == null) {
>     try {
>       fqHost = SecurityUtil.getByName(host).getHostName();
>       // slight race condition, but won't hurt
>       canonicalizedHostCache.put(host, fqHost);
>     } catch (UnknownHostException e) {
>       fqHost = host;
>     }
>   }
>   return fqHost;
> }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu reassigned HADOOP-11387:
--

Assignee: Li Lu  (was: Haohui Mai)

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
> private static String canonicalizeHost(String host) {
>   // check if the host has already been canonicalized
>   String fqHost = canonicalizedHostCache.get(host);
>   if (fqHost == null) {
>     try {
>       fqHost = SecurityUtil.getByName(host).getHostName();
>       // slight race condition, but won't hurt
>       canonicalizedHostCache.put(host, fqHost);
>     } catch (UnknownHostException e) {
>       fqHost = host;
>     }
>   }
>   return fqHost;
> }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11398) RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately

2014-12-15 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247146#comment-14247146
 ] 

Li Lu commented on HADOOP-11398:


Hi [~jingzhao], thanks for the review. I agree that we should not make the 
retry policies stateful; thanks for pointing this out. We may want to address 
the timing problems as a new feature rather than through this quick fix. 

> RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately
> --
>
> Key: HADOOP-11398
> URL: https://issues.apache.org/jira/browse/HADOOP-11398
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-11398-121114.patch
>
>
> RetryUpToMaximumTimeWithFixedSleep now inherits 
> RetryUpToMaximumCountWithFixedSleep and just acts as a wrapper to decide 
> maxRetries. The current implementation uses (maxTime / sleepTime) as the 
> number of maxRetries. This is fine if the actual time for each retry is 
> significantly less than the sleep time, but it becomes less accurate when each 
> retry takes a comparable amount of time to the sleep time. The problem gets 
> worse when there are underlying retries. 
> We may want to use timers inside RetryUpToMaximumTimeWithFixedSleep to 
> perform accurate timing. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11398) RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately

2014-12-12 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244963#comment-14244963
 ] 

Li Lu commented on HADOOP-11398:


The three findbugs warnings are not introduced by this Jira. There is ongoing 
work to fix them in separate Jiras. 

> RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately
> --
>
> Key: HADOOP-11398
> URL: https://issues.apache.org/jira/browse/HADOOP-11398
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-11398-121114.patch
>
>
> RetryUpToMaximumTimeWithFixedSleep now inherits 
> RetryUpToMaximumCountWithFixedSleep and just acts as a wrapper to decide 
> maxRetries. The current implementation uses (maxTime / sleepTime) as the 
> number of maxRetries. This is fine if the actual time for each retry is 
> significantly less than the sleep time, but it becomes less accurate when each 
> retry takes a comparable amount of time to the sleep time. The problem gets 
> worse when there are underlying retries. 
> We may want to use timers inside RetryUpToMaximumTimeWithFixedSleep to 
> perform accurate timing. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11398) RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately

2014-12-11 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11398:
---
Status: Patch Available  (was: Open)

> RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately
> --
>
> Key: HADOOP-11398
> URL: https://issues.apache.org/jira/browse/HADOOP-11398
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-11398-121114.patch
>
>
> RetryUpToMaximumTimeWithFixedSleep now inherits 
> RetryUpToMaximumCountWithFixedSleep and just acts as a wrapper to decide 
> maxRetries. The current implementation uses (maxTime / sleepTime) as the 
> number of maxRetries. This is fine if the actual time for each retry is 
> significantly less than the sleep time, but it becomes less accurate when each 
> retry takes a comparable amount of time to the sleep time. The problem gets 
> worse when there are underlying retries. 
> We may want to use timers inside RetryUpToMaximumTimeWithFixedSleep to 
> perform accurate timing. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11398) RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately

2014-12-11 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11398:
---
Attachment: HADOOP-11398-121114.patch

In this patch I added one more check to RetryUpToMaximumTimeWithFixedSleep's 
shouldRetry method. With this design, a RetryUpToMaximumTimeWithFixedSleep only 
launches retries before the time window of size maxTime ends. I also added two 
unit tests to verify this: one to make sure that if the underlying retry 
process is delayed, we no longer launch new retries outside the allowed time 
window, and one to make sure that if the time window is sufficiently large, a 
RetryUpToMaximumTimeWithFixedSleep can finish successfully. 
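The window check described above amounts to comparing against a wall-clock deadline recorded up front; a minimal sketch, with illustrative class and method names rather than the actual Hadoop code:

```java
import java.util.concurrent.TimeUnit;

// Illustrative sketch of a deadline-based retry window; names are made up
// for this example and do not match the actual Hadoop patch.
public class MaxTimeRetrySketch {
  private final long deadlineNanos;

  MaxTimeRetrySketch(long maxTimeMillis) {
    // record the end of the allowed retry window once, at construction
    this.deadlineNanos =
        System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(maxTimeMillis);
  }

  /** Retry only while we are still inside the maxTime window. */
  boolean shouldRetry() {
    return System.nanoTime() < deadlineNanos;
  }

  public static void main(String[] args) throws InterruptedException {
    MaxTimeRetrySketch policy = new MaxTimeRetrySketch(50);
    System.out.println(policy.shouldRetry());  // true: window just opened
    Thread.sleep(80);                          // simulate slow retries
    System.out.println(policy.shouldRetry());  // false: window has closed
  }
}
```

Unlike a fixed (maxTime / sleepTime) retry count, this stops retrying on time even when individual attempts are slow.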

> RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately
> --
>
> Key: HADOOP-11398
> URL: https://issues.apache.org/jira/browse/HADOOP-11398
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-11398-121114.patch
>
>
> RetryUpToMaximumTimeWithFixedSleep now inherits 
> RetryUpToMaximumCountWithFixedSleep and just acts as a wrapper to decide 
> maxRetries. The current implementation uses (maxTime / sleepTime) as the 
> number of maxRetries. This is fine if the actual time for each retry is 
> significantly less than the sleep time, but it becomes less accurate when each 
> retry takes a comparable amount of time to the sleep time. The problem gets 
> worse when there are underlying retries. 
> We may want to use timers inside RetryUpToMaximumTimeWithFixedSleep to 
> perform accurate timing. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11398) RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately

2014-12-11 Thread Li Lu (JIRA)
Li Lu created HADOOP-11398:
--

 Summary: RetryUpToMaximumTimeWithFixedSleep needs to behave more 
accurately
 Key: HADOOP-11398
 URL: https://issues.apache.org/jira/browse/HADOOP-11398
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Li Lu
Assignee: Li Lu


RetryUpToMaximumTimeWithFixedSleep now inherits 
RetryUpToMaximumCountWithFixedSleep and just acts as a wrapper to decide 
maxRetries. The current implementation uses (maxTime / sleepTime) as the number 
of maxRetries. This is fine if the actual time for each retry is significantly 
less than the sleep time, but it becomes less accurate when each retry takes a 
comparable amount of time to the sleep time. The problem gets worse when there 
are underlying retries. 

We may want to use timers inside RetryUpToMaximumTimeWithFixedSleep to perform 
accurate timing. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11386) Replace \n by %n in format hadoop-common format strings

2014-12-10 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11386:
---
Status: Patch Available  (was: Open)

> Replace \n by %n in format hadoop-common format strings
> ---
>
> Key: HADOOP-11386
> URL: https://issues.apache.org/jira/browse/HADOOP-11386
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-11386-120914.patch
>
>
> It is recommended to use '%n' in format strings. We may want to replace all 
> '\n' in hadoop-common. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11386) Replace \n by %n in format hadoop-common format strings

2014-12-10 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11386:
---
Status: Open  (was: Patch Available)

> Replace \n by %n in format hadoop-common format strings
> ---
>
> Key: HADOOP-11386
> URL: https://issues.apache.org/jira/browse/HADOOP-11386
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-11386-120914.patch
>
>
> It is recommended to use '%n' in format strings. We may want to replace all 
> '\n' in hadoop-common. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11388) Remove deprecated o.a.h.metrics.file.FileContext

2014-12-10 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14241703#comment-14241703
 ] 

Li Lu commented on HADOOP-11388:


Could not reproduce the eclipse failure locally. 

> Remove deprecated o.a.h.metrics.file.FileContext
> 
>
> Key: HADOOP-11388
> URL: https://issues.apache.org/jira/browse/HADOOP-11388
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>Priority: Minor
> Attachments: HADOOP-11388-121014.patch
>
>
> The {{o.a.h.metrics.file.FileContext}} has been deprecated. This jira 
> proposes to remove it from the repository.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11388) Remove deprecated o.a.h.metrics.file.FileContext

2014-12-10 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11388:
---
Status: Patch Available  (was: Open)

> Remove deprecated o.a.h.metrics.file.FileContext
> 
>
> Key: HADOOP-11388
> URL: https://issues.apache.org/jira/browse/HADOOP-11388
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>Priority: Minor
> Attachments: HADOOP-11388-121014.patch
>
>
> The {{o.a.h.metrics.file.FileContext}} has been deprecated. This jira 
> proposes to remove it from the repository.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11388) Remove deprecated o.a.h.metrics.file.FileContext

2014-12-10 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11388:
---
Attachment: HADOOP-11388-121014.patch

In this patch I removed the deprecated o.a.h.metrics.file.FileContext. 

> Remove deprecated o.a.h.metrics.file.FileContext
> 
>
> Key: HADOOP-11388
> URL: https://issues.apache.org/jira/browse/HADOOP-11388
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>Priority: Minor
> Attachments: HADOOP-11388-121014.patch
>
>
> The {{o.a.h.metrics.file.FileContext}} has been deprecated. This jira 
> proposes to remove it from the repository.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11386) Replace \n by %n in format hadoop-common format strings

2014-12-09 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11386:
---
Status: Patch Available  (was: Open)

> Replace \n by %n in format hadoop-common format strings
> ---
>
> Key: HADOOP-11386
> URL: https://issues.apache.org/jira/browse/HADOOP-11386
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-11386-120914.patch
>
>
> It is recommended to use '%n' in format strings. We may want to replace all 
> '\n' in hadoop-common. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11386) Replace \n by %n in format hadoop-common format strings

2014-12-09 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11386:
---
Attachment: HADOOP-11386-120914.patch

In this patch I changed all '\n' in format strings in hadoop-common into '%n'. 
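For context, the practical difference is that '\n' always emits a bare LF while '%n' expands to the platform line separator; a minimal check:

```java
public class NewlineFormat {
  public static void main(String[] args) {
    // '%n' expands to the platform-specific line separator
    String platform = String.format("done%n");
    // '\n' stays a bare LF regardless of platform
    String bareLf = String.format("done\n");
    System.out.println(platform.equals("done" + System.lineSeparator()));  // true
    System.out.println(bareLf.equals("done\n"));                           // true
  }
}
```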

> Replace \n by %n in format hadoop-common format strings
> ---
>
> Key: HADOOP-11386
> URL: https://issues.apache.org/jira/browse/HADOOP-11386
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-11386-120914.patch
>
>
> It is recommended to use '%n' in format strings. We may want to replace all 
> '\n' in hadoop-common. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11386) Replace \n by %n in format hadoop-common format strings

2014-12-09 Thread Li Lu (JIRA)
Li Lu created HADOOP-11386:
--

 Summary: Replace \n by %n in format hadoop-common format strings
 Key: HADOOP-11386
 URL: https://issues.apache.org/jira/browse/HADOOP-11386
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Li Lu
Assignee: Li Lu


It is recommended to use '%n' in format strings. We may want to replace all 
'\n' in hadoop-common. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11384) Fix new findbugs warnings in hadoop-openstack

2014-12-09 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu resolved HADOOP-11384.

Resolution: Duplicate

Duplicate of HADOOP-11381

> Fix new findbugs warnings in hadoop-openstack
> -
>
> Key: HADOOP-11384
> URL: https://issues.apache.org/jira/browse/HADOOP-11384
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: findbugs
>
> When running findbugs 3.0 locally, new warnings are generated. This Jira 
> aims to address the new warnings in hadoop-openstack. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11382) Fix new findbugs warnings in hadoop-aws

2014-12-09 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu resolved HADOOP-11382.

Resolution: Duplicate

> Fix new findbugs warnings in hadoop-aws
> ---
>
> Key: HADOOP-11382
> URL: https://issues.apache.org/jira/browse/HADOOP-11382
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: findbugs
>
> When running findbugs 3.0 locally, new warnings are generated. This Jira 
> aims to address the new warnings in hadoop-aws



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11383) Fix new findbugs warnings in hadoop-azure

2014-12-09 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu resolved HADOOP-11383.

Resolution: Duplicate

Duplicate of HADOOP-11381

> Fix new findbugs warnings in hadoop-azure
> -
>
> Key: HADOOP-11383
> URL: https://issues.apache.org/jira/browse/HADOOP-11383
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: findbugs
>
> When running findbugs 3.0 locally, new warnings are generated. This Jira 
> aims to address the new warnings in hadoop-azure. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11381) Fix findbugs warnings in hadoop-distcp, hadoop-aws, hadoop-azure, and hadoop-openstack

2014-12-09 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11381:
---
Status: Patch Available  (was: Open)

> Fix findbugs warnings in hadoop-distcp, hadoop-aws, hadoop-azure, and 
> hadoop-openstack
> --
>
> Key: HADOOP-11381
> URL: https://issues.apache.org/jira/browse/HADOOP-11381
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: findbugs
> Attachments: HADOOP-11381-120914.patch
>
>
> When running findbugs 3.0 locally, new warnings are generated. This Jira 
> aims to address the new warnings in hadoop-distcp, hadoop-aws, hadoop-azure, 
> and hadoop-openstack. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11381) Fix findbugs warnings in hadoop-distcp, hadoop-aws, hadoop-azure, and hadoop-openstack

2014-12-09 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11381:
---
Attachment: HADOOP-11381-120914.patch

In this patch I addressed findbugs 3 warnings in hadoop-distcp, -aws, -azure, 
and -openstack. Most warnings are related to encoding or newline formatting, 
but one warning in hadoop-azure points to a possible race. In my understanding, 
SelfRenewingLease(CloudBlobWrapper) in SelfRenewingLease.java may be called 
concurrently (threadNumber was volatile), and the previous threadNumber++ 
operation is not atomic. I replaced it with an AtomicInteger. 
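A minimal illustration of why the change matters, with made-up names rather than the actual hadoop-azure code: volatile guarantees visibility, but '++' on a volatile int is a read-modify-write and can lose increments under contention, whereas AtomicInteger.incrementAndGet is atomic.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch with illustrative names: AtomicInteger.incrementAndGet is a single
// atomic read-modify-write, unlike '++' on a volatile int, which can lose
// updates when several threads increment concurrently.
public class ThreadNumberSketch {
  private static final AtomicInteger threadNumber = new AtomicInteger(0);

  static int nextThreadNumber() {
    return threadNumber.incrementAndGet();
  }

  public static void main(String[] args) throws InterruptedException {
    Thread[] workers = new Thread[8];
    for (int i = 0; i < workers.length; i++) {
      workers[i] = new Thread(() -> {
        for (int j = 0; j < 1000; j++) {
          nextThreadNumber();
        }
      });
      workers[i].start();
    }
    for (Thread t : workers) {
      t.join();
    }
    // every increment is retained: 8 threads x 1000 increments
    System.out.println(threadNumber.get());  // 8000
  }
}
```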

> Fix findbugs warnings in hadoop-distcp, hadoop-aws, hadoop-azure, and 
> hadoop-openstack
> --
>
> Key: HADOOP-11381
> URL: https://issues.apache.org/jira/browse/HADOOP-11381
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: findbugs
> Attachments: HADOOP-11381-120914.patch
>
>
> When running findbugs 3.0 locally, new warnings are generated. This Jira 
> aims to address the new warnings in hadoop-distcp, hadoop-aws, hadoop-azure, 
> and hadoop-openstack. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11381) Fix findbugs warnings in hadoop-distcp, hadoop-aws, hadoop-azure, and hadoop-openstack

2014-12-09 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11381:
---
Description: When running findbugs 3.0 locally, new warnings are generated. 
This Jira aims to address the new warnings in hadoop-distcp, hadoop-aws, 
hadoop-azure, and hadoop-openstack.   (was: When running findbugs 3.0 locally, 
new warnings are generated. This Jira aims to address the new warnings in 
hadoop-distcp)

> Fix findbugs warnings in hadoop-distcp, hadoop-aws, hadoop-azure, and 
> hadoop-openstack
> --
>
> Key: HADOOP-11381
> URL: https://issues.apache.org/jira/browse/HADOOP-11381
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: findbugs
>
> When running findbugs 3.0 locally, new warnings are generated. This Jira 
> aims to address the new warnings in hadoop-distcp, hadoop-aws, hadoop-azure, 
> and hadoop-openstack. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11381) Fix findbugs warnings in hadoop-distcp, hadoop-aws, hadoop-azure, and hadoop-openstack

2014-12-09 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11381:
---
Summary: Fix findbugs warnings in hadoop-distcp, hadoop-aws, hadoop-azure, 
and hadoop-openstack  (was: Fix findbugs warnings in hadoop-distcp)

> Fix findbugs warnings in hadoop-distcp, hadoop-aws, hadoop-azure, and 
> hadoop-openstack
> --
>
> Key: HADOOP-11381
> URL: https://issues.apache.org/jira/browse/HADOOP-11381
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: findbugs
>
> When running findbugs 3.0 locally, new warnings are generated. This Jira 
> aims to address the new warnings in hadoop-distcp



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11379) Fix new findbugs warnings in hadoop-auth*

2014-12-09 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14240151#comment-14240151
 ] 

Li Lu commented on HADOOP-11379:


[~sureshms], sure, I'll consolidate the rest of them into one Jira. 

> Fix new findbugs warnings in hadoop-auth*
> -
>
> Key: HADOOP-11379
> URL: https://issues.apache.org/jira/browse/HADOOP-11379
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: findbugs
> Fix For: 2.7.0
>
> Attachments: HADOOP-11379-120914.patch
>
>
> When running findbugs 3.0 locally, new warnings are generated. This Jira 
> aims to address the new warnings in hadoop-auth and hadoop-auth-examples. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10477) Clean up findbug warnings found by findbugs 3.0.0

2014-12-09 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-10477:
---
Description: This is an umbrella jira to clean up the new findbug warnings 
found by findbugs 3.0.0.  (was: This is an umbrella jira to clean up the new 
findbug warnings found by findbugs 2.0.2.)

> Clean up findbug warnings found by findbugs 3.0.0
> -
>
> Key: HADOOP-10477
> URL: https://issues.apache.org/jira/browse/HADOOP-10477
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> This is an umbrella jira to clean up the new findbug warnings found by 
> findbugs 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11384) Fix new findbugs warnings in hadoop-openstack

2014-12-09 Thread Li Lu (JIRA)
Li Lu created HADOOP-11384:
--

 Summary: Fix new findbugs warnings in hadoop-openstack
 Key: HADOOP-11384
 URL: https://issues.apache.org/jira/browse/HADOOP-11384
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Li Lu
Assignee: Li Lu


When running findbugs 3.0 locally, new warnings are generated. This Jira aims 
to address the new warnings in hadoop-openstack. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11383) Fix new findbugs warnings in hadoop-azure

2014-12-09 Thread Li Lu (JIRA)
Li Lu created HADOOP-11383:
--

 Summary: Fix new findbugs warnings in hadoop-azure
 Key: HADOOP-11383
 URL: https://issues.apache.org/jira/browse/HADOOP-11383
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Li Lu
Assignee: Li Lu


When running findbugs 3.0 locally, new warnings are generated. This Jira aims 
to address the new warnings in hadoop-azure. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11381) Fix findbugs warnings in hadoop-distcp

2014-12-09 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11381:
---
Labels: findbugs  (was: )

> Fix findbugs warnings in hadoop-distcp
> --
>
> Key: HADOOP-11381
> URL: https://issues.apache.org/jira/browse/HADOOP-11381
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: findbugs
>
> When running findbugs 3.0 locally, new warnings are generated. This Jira 
> aims to address the new warnings in hadoop-distcp



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11382) Fix new findbugs warnings in hadoop-aws

2014-12-09 Thread Li Lu (JIRA)
Li Lu created HADOOP-11382:
--

 Summary: Fix new findbugs warnings in hadoop-aws
 Key: HADOOP-11382
 URL: https://issues.apache.org/jira/browse/HADOOP-11382
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Li Lu
Assignee: Li Lu


When running findbugs 3.0 locally, new warnings are generated. This Jira aims 
to address the new warnings in hadoop-aws



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11381) Fix findbugs warnings in hadoop-distcp

2014-12-09 Thread Li Lu (JIRA)
Li Lu created HADOOP-11381:
--

 Summary: Fix findbugs warnings in hadoop-distcp
 Key: HADOOP-11381
 URL: https://issues.apache.org/jira/browse/HADOOP-11381
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Li Lu
Assignee: Li Lu


When running findbugs 3.0 locally, new warnings are generated. This Jira aims 
to address the new warnings in hadoop-distcp



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

