[jira] [Updated] (HADOOP-8929) Add toString, other improvements for SampleQuantiles
[ https://issues.apache.org/jira/browse/HADOOP-8929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HADOOP-8929:
--------------------------------
    Resolution: Fixed
    Fix Version/s: 2.0.3-alpha, 3.0.0
    Target Version/s: 2.0.2-alpha, 3.0.0 (was: 3.0.0, 2.0.2-alpha)
    Hadoop Flags: Reviewed
    Status: Resolved (was: Patch Available)

Committed to branch-2 and trunk, thanks for the reviews.

Add toString, other improvements for SampleQuantiles
----------------------------------------------------
    Key: HADOOP-8929
    URL: https://issues.apache.org/jira/browse/HADOOP-8929
    Project: Hadoop Common
    Issue Type: Improvement
    Components: metrics
    Affects Versions: 3.0.0, 2.0.2-alpha
    Reporter: Todd Lipcon
    Assignee: Todd Lipcon
    Fix For: 3.0.0, 2.0.3-alpha
    Attachments: hadoop-8929.txt

The new SampleQuantiles class is useful in the context of benchmarks, but currently there is no way to print it out outside the context of a metrics sink. It would be nice to have a convenient way to stringify it for logging, etc. Also:
- made it Comparable and changed the HashMap to TreeMap so that the printout is in ascending percentile order. Given that this map is always very small, and snapshot() is only called once a minute or so, the runtime/memory differences between TreeMap and HashMap should be negligible.
- changed the behavior to return null instead of throwing, because all the catching got pretty ugly. In implementing toString, I figured I'd clean up the other behavior along the way.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
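The ordering change described above can be sketched like this; the Quantile class here is a simplified stand-in for illustration, not the actual org.apache.hadoop.metrics2.util.Quantile API:

```java
import java.util.TreeMap;

public class QuantileOrdering {
    // Simplified stand-in: a Comparable quantile key, so that a TreeMap keyed
    // on it iterates in ascending percentile order (which toString relies on).
    static class Quantile implements Comparable<Quantile> {
        final double quantile;

        Quantile(double q) { this.quantile = q; }

        @Override
        public int compareTo(Quantile other) {
            return Double.compare(quantile, other.quantile);
        }

        @Override
        public String toString() {
            return (int) (quantile * 100) + "th percentile";
        }
    }

    public static void main(String[] args) {
        // A HashMap would iterate these in hash order; a TreeMap guarantees
        // 50th, then 90th, then 99th, regardless of insertion order.
        TreeMap<Quantile, Long> snapshot = new TreeMap<>();
        snapshot.put(new Quantile(0.99), 120L);
        snapshot.put(new Quantile(0.50), 10L);
        snapshot.put(new Quantile(0.90), 80L);
        System.out.println(snapshot);
    }
}
```

Since snapshot() runs only about once a minute over a handful of entries, the O(log n) TreeMap operations are indeed negligible next to a HashMap, as the comment argues.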
[jira] [Commented] (HADOOP-8567) Backport conf servlet with dump running configuration to branch 1.x
[ https://issues.apache.org/jira/browse/HADOOP-8567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476779#comment-13476779 ]

Junping Du commented on HADOOP-8567:
------------------------------------
Sorry, I have owed this patch for a long time. Thanks for delivering it; I will help review it.

Backport conf servlet with dump running configuration to branch 1.x
-------------------------------------------------------------------
    Key: HADOOP-8567
    URL: https://issues.apache.org/jira/browse/HADOOP-8567
    Project: Hadoop Common
    Issue Type: New Feature
    Components: conf
    Affects Versions: 1.0.0
    Reporter: Junping Du
    Assignee: Junping Du
    Attachments: Hadoop.8567.branch-1.001.patch

HADOOP-6408 provides a conf servlet that can dump the running configuration, which greatly helps admins troubleshoot configuration issues. However, that patch only applies to branches after 0.21 and should be backported to branch 1.x.
[jira] [Assigned] (HADOOP-8567) Backport conf servlet with dump running configuration to branch 1.x
[ https://issues.apache.org/jira/browse/HADOOP-8567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du reassigned HADOOP-8567:
----------------------------------
    Assignee: Jing Zhao (was: Junping Du)
[jira] [Commented] (HADOOP-8929) Add toString, other improvements for SampleQuantiles
[ https://issues.apache.org/jira/browse/HADOOP-8929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476785#comment-13476785 ]

Hudson commented on HADOOP-8929:
--------------------------------
Integrated in Hadoop-trunk-Commit #2868 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/2868/])
HADOOP-8929. Add toString, other improvements for SampleQuantiles. Contributed by Todd Lipcon. (Revision 1398658)

Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1398658
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableQuantiles.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/Quantile.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/SampleQuantiles.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/util/TestSampleQuantiles.java
[jira] [Updated] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard
[ https://issues.apache.org/jira/browse/HADOOP-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Damien Hardy updated HADOOP-8922:
---------------------------------
    Status: Open (was: Patch Available)

Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard
------------------------------------------------------------------------------------------
    Key: HADOOP-8922
    URL: https://issues.apache.org/jira/browse/HADOOP-8922
    Project: Hadoop Common
    Issue Type: Improvement
    Components: metrics
    Reporter: Damien Hardy
    Priority: Trivial
    Labels: newbie, patch
    Attachments: HADOOP-8922-3.patch, test.html

JMXJsonServlet could provide a JSONP alternative to JSON, so that in-browser javascript GUIs can make requests. For security against XSS, browsers limit requests to other domains[¹|#ref1], so metrics from cluster nodes cannot be used in a pure-javascript interface. An example of this kind of dashboard is the bigdesk[²|#ref2] plugin for ElasticSearch. To achieve this, the servlet should detect a GET parameter (callback=) and modify the response by surrounding the JSON value with "(" and ");"[³|#ref3]; the callback name is variable and should be provided by the client as the callback parameter value.
{anchor:ref1}[1] https://developer.mozilla.org/en-US/docs/Same_origin_policy_for_JavaScript
{anchor:ref2}[2] https://github.com/lukas-vlcek/bigdesk
{anchor:ref3}[3] http://en.wikipedia.org/wiki/JSONP
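The callback wrapping the description asks for takes only a few lines; JsonpWrapper and wrap() here are hypothetical names for illustration, not the actual JMXJsonServlet code:

```java
public class JsonpWrapper {
    // Hypothetical helper showing the requested behavior: when the request
    // carries a callback= parameter, surround the JSON body with
    // "callback(" and ");" so a <script> tag on another origin can load it.
    public static String wrap(String json, String callback) {
        if (callback == null || callback.isEmpty()) {
            return json; // no callback requested: plain JSON, as today
        }
        return callback + "(" + json + ");";
    }

    public static void main(String[] args) {
        // e.g. a GET with ?callback=render wraps the payload for a <script> tag
        System.out.println(wrap("{\"beans\":[]}", "render"));
    }
}
```

A dashboard page would then consume it with something like `<script src="http://namenode:50070/jmx?callback=render"></script>`, where render is a function defined on the page (port and path illustrative).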
[jira] [Updated] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard
[ https://issues.apache.org/jira/browse/HADOOP-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Damien Hardy updated HADOOP-8922:
---------------------------------
    Attachment: (was: HADOOP-8922-2.patch)
[jira] [Commented] (HADOOP-8926) hadoop.util.PureJavaCrc32 cache hit-ratio is low for static data
[ https://issues.apache.org/jira/browse/HADOOP-8926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476795#comment-13476795 ]

Gopal V commented on HADOOP-8926:
---------------------------------
Sure, I will fix the readability of the patch by adding T8_[0-7]_start and re-run it through the JIT to make sure the class variables are getting inlined in the math. If that still results in a register spill, I will move them out of the loop as local final variables and re-submit a patch today.

hadoop.util.PureJavaCrc32 cache hit-ratio is low for static data
----------------------------------------------------------------
    Key: HADOOP-8926
    URL: https://issues.apache.org/jira/browse/HADOOP-8926
    Project: Hadoop Common
    Issue Type: Improvement
    Components: util
    Affects Versions: 2.0.3-alpha
    Environment: Ubuntu 10.10 i386
    Reporter: Gopal V
    Assignee: Gopal V
    Priority: Trivial
    Labels: optimization
    Attachments: crc32-faster+test.patch, pure-crc32-cache-hit.patch

While running microbenchmarks for the HDFS write codepath, a significant part of the CPU fraction was consumed by DataChecksum.update(). The attached patch converts the static arrays in CRC32 into a single linear array for a performance boost in the inner loop.

Milliseconds for 1 Gig (16400 loops over a 64kb chunk):
|| platform || original || cache-aware || improvement ||
| x86 | 3894 | 2304 | 40.83 |
| x86_64 | 2131 | 1826 | 14 |

The performance improvement on x86 is rather larger than in the 64-bit case, due to the extra register/stack pressure caused by the static arrays. A closer analysis of the PureJavaCrc32 JIT code shows the following assembly fragment:
{code}
0x40f1e345: mov $0x184,%ecx
0x40f1e34a: mov 0x4415b560(%ecx),%ecx  ;*getstatic T8_5
                                       ; - PureJavaCrc32::update@95 (line 61)
                                       ; {oop('PureJavaCrc32')}
0x40f1e350: mov %ecx,0x2c(%esp)
{code}
Basically, the static variables T8_0 through T8_7 are being spilled to the stack because of register pressure. The x86_64 case has a lower likelihood of such pessimistic JIT code due to the increased number of registers.
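The layout change the patch describes can be sketched as follows; the names T and t8Start are illustrative (the comment above mentions T8_[0-7]_start constants), not the actual PureJavaCrc32 fields:

```java
public class LinearCrcTable {
    // Illustrative sketch: eight separate 256-entry tables T8_0..T8_7 are
    // folded into one linear array, so the inner loop indexes off a single
    // base reference instead of eight separate getstatic loads (which the
    // assembly fragment above shows being spilled to the stack on x86).
    static final int[] T = new int[8 * 256];

    // T8_n_start-style offset: table n starts at n * 256.
    static int t8Start(int n) {
        return n * 256;
    }

    // What used to be T8_5[b & 0xff] becomes T[t8Start(5) + (b & 0xff)]:
    // one array base held in a register, plus a constant offset the JIT
    // can fold into the addressing mode.
    static int lookup(int table, int b) {
        return T[t8Start(table) + (b & 0xff)];
    }
}
```

The win is purely about register pressure: a single base pointer stays resident where eight table references could not.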
[jira] [Updated] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard
[ https://issues.apache.org/jira/browse/HADOOP-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Damien Hardy updated HADOOP-8922:
---------------------------------
    Attachment: HADOOP-8922-3.patch

New patch correcting spelling in javadoc.
[jira] [Updated] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard
[ https://issues.apache.org/jira/browse/HADOOP-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Damien Hardy updated HADOOP-8922:
---------------------------------
    Status: Patch Available (was: Open)
[jira] [Commented] (HADOOP-7105) [IPC] Improvement of lock mechanism in Listener and Reader thread
[ https://issues.apache.org/jira/browse/HADOOP-7105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476805#comment-13476805 ]

luoli commented on HADOOP-7105:
-------------------------------
jinglong, which issue? A link please?

[IPC] Improvement of lock mechanism in Listener and Reader thread
-----------------------------------------------------------------
    Key: HADOOP-7105
    URL: https://issues.apache.org/jira/browse/HADOOP-7105
    Project: Hadoop Common
    Issue Type: Improvement
    Components: ipc
    Affects Versions: 0.21.0
    Reporter: jinglong.liujl
    Attachments: improveListenerLock2.patch, improveListenerLock.patch

Under heavy concurrent client access, the single-threaded Listener becomes a bottleneck: many clients cannot be served and get connection timeouts. To improve Listener capacity, we made two modifications:
1. Tune ipc.server.listen.queue.size to a larger value to avoid client retries.
2. In the current implementation, the Listener calls registerChannel() and finishAdd() in the Reader, which acquire the Reader's synchronized lock, so the Listener spends too much time waiting for this lock.

We tested with: ./bin/hadoop org.apache.hadoop.hdfs.NNThroughputBenchmark -op create -threads 1 -files 1
Case 1 (original): cannot pass, and reports "hadoop-rd101.jx.baidu.com/10.65.25.166:59310. Already tried 0 time(s)."
Case 2 (backlog tuned to 10240): average cost 1285.72 ms
Case 3 (backlog tuned to 10240, plus the improved lock mechanism in the patch): average cost 941.32 ms
Average cost improves by 26%.
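One common way to avoid the Listener blocking on the Reader's lock (a sketch of the general hand-off pattern, not necessarily what the attached patch does) is to pass accepted channels through a queue that the Reader drains on its own thread:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ReaderHandoff<T> {
    // The Listener enqueues accepted channels without entering the Reader's
    // monitor; the Reader registers them with its selector on its own thread,
    // so registerChannel()/finishAdd()-style synchronization disappears from
    // the accept path. T would be a SocketChannel in a real server.
    private final BlockingQueue<T> pending = new LinkedBlockingQueue<>();

    // Called from the Listener thread after accept(): never blocks.
    public void add(T channel) {
        pending.offer(channel);
    }

    // Called from the Reader thread between select() calls.
    public T poll() {
        return pending.poll(); // null when nothing is pending
    }
}
```

The Listener's accept loop then stays lock-free with respect to the Reader, which matches the report that the Listener "cost too many time" waiting for the Reader's monitor.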
[jira] [Commented] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard
[ https://issues.apache.org/jira/browse/HADOOP-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476822#comment-13476822 ]

Hadoop QA commented on HADOOP-8922:
-----------------------------------
{color:green}+1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12549269/HADOOP-8922-3.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1631//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1631//console
This message is automatically generated.
[jira] [Commented] (HADOOP-8921) ant build.xml in branch-1 ignores -Dcompile.native
[ https://issues.apache.org/jira/browse/HADOOP-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476877#comment-13476877 ]

Gopal V commented on HADOOP-8921:
---------------------------------
I've personally wasted over an hour figuring out that the option has to be explicitly disabled before it skips running autoconf/automake (in the create-configure part). The older approach was bad for someone who did a git checkout and ran a build without paying attention to the docs about native library compatibility. I'd say the optimizations (i.e. non-core features) can be skipped for a clean build on platforms where the code won't compile / isn't supported. Every single attempt to build on a Mac would result in a failed compilation, until someone discovers the -Dcompile.native option. Clean builds on ant compile should be encouraged (on any platform) without RTFMing for a -D option. Of course, on platforms where the native code is indeed supported, it would still error out if, say, JNI headers can't be found.

ant build.xml in branch-1 ignores -Dcompile.native
--------------------------------------------------
    Key: HADOOP-8921
    URL: https://issues.apache.org/jira/browse/HADOOP-8921
    Project: Hadoop Common
    Issue Type: Bug
    Components: build
    Affects Versions: 1.2.0
    Environment: Mac OS X 10.7.4
    Reporter: Gopal V
    Priority: Trivial
    Labels: ant, autoconf, patch
    Attachments: HADOOP-8921.4.patch

ant -Dcompile.native=false still runs autoconf and libtoolize. According to the ant 1.8 manual, a target's if conditions are checked only after its dependencies have been run. The current if condition in the code fails to prevent the autoconf/libtool components from running. Fix this by moving the if condition up into the compile-native target and changing it to a param substitution instead of evaluating it as a condition.
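A build.xml fragment in the spirit of the fix might look roughly like this; target and property names are illustrative, not the exact branch-1 patch:

```xml
<!-- Sketch of the described fix (illustrative names, not the exact patch).
     Ant evaluates a target's if/unless attribute only AFTER its depends=
     targets have run, so guarding compile-native alone is not enough:
     the autoconf step must carry its own guard too. -->
<target name="create-native-configure" if="compile.native">
  <!-- autoconf/libtoolize run only when native compilation is requested -->
</target>

<target name="compile-native" if="compile.native"
        depends="create-native-configure">
  <!-- javah + configure + make for the native library -->
</target>
```

With both guards in place, `ant compile -Dcompile.native=false` (or simply omitting the property) skips the autoconf machinery entirely, which is the clean-build-on-any-platform behavior the comment argues for.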
[jira] [Commented] (HADOOP-8920) Add more javadoc to metrics2 related classes
[ https://issues.apache.org/jira/browse/HADOOP-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476897#comment-13476897 ]

Hudson commented on HADOOP-8920:
--------------------------------
Integrated in Hadoop-Yarn-trunk #5 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/5/])
HADOOP-8920. Add more javadoc to metrics2 related classes. Contributed by Suresh Srinivas. (Revision 1398640)

Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1398640
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/MetricsCollector.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/MetricsFilter.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/MetricsInfo.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/MetricsRecord.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/MetricsRecordBuilder.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/MetricsSink.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/MetricsSource.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/MetricsSystem.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/annotation/Metric.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsRecordBuilderImpl.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsRecordImpl.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/DefaultMetricsSystem.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MetricsSourceBuilder.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableMetricsFactory.java

Add more javadoc to metrics2 related classes
--------------------------------------------
    Key: HADOOP-8920
    URL: https://issues.apache.org/jira/browse/HADOOP-8920
    Project: Hadoop Common
    Issue Type: Improvement
    Components: metrics
    Affects Versions: 1.0.0, 0.23.0, 2.0.0-alpha, 3.0.0
    Reporter: Suresh Srinivas
    Assignee: Suresh Srinivas
    Priority: Minor
    Fix For: 3.0.0
    Attachments: HADOOP-8920.patch

Metrics2-related code is very sparsely documented. Here is a patch that adds javadoc that should make some of the code easier to browse and understand.
[jira] [Commented] (HADOOP-8929) Add toString, other improvements for SampleQuantiles
[ https://issues.apache.org/jira/browse/HADOOP-8929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476899#comment-13476899 ]

Hudson commented on HADOOP-8929:
--------------------------------
Integrated in Hadoop-Yarn-trunk #5 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/5/])
HADOOP-8929. Add toString, other improvements for SampleQuantiles. Contributed by Todd Lipcon. (Revision 1398658)

Result = FAILURE
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1398658
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableQuantiles.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/Quantile.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/SampleQuantiles.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/util/TestSampleQuantiles.java
[jira] [Updated] (HADOOP-8926) hadoop.util.PureJavaCrc32 cache hit-ratio is low for static data
[ https://issues.apache.org/jira/browse/HADOOP-8926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gopal V updated HADOOP-8926:
----------------------------
    Status: Open (was: Patch Available)

Reworking for readability of code.
[jira] [Updated] (HADOOP-8926) hadoop.util.PureJavaCrc32 cache hit-ratio is low for static data
[ https://issues.apache.org/jira/browse/HADOOP-8926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gopal V updated HADOOP-8926:
----------------------------
    Attachment: crc32-faster+readable.patch

Rewrote the core loop for readability: turned all loop locals into final variables and changed the small loop into a switch statement (java tableswitch instruction).
[jira] [Commented] (HADOOP-8926) hadoop.util.PureJavaCrc32 cache hit-ratio is low for static data
[ https://issues.apache.org/jira/browse/HADOOP-8926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476916#comment-13476916 ]

Gopal V commented on HADOOP-8926:
---------------------------------
On x86_64 on an ec2 m1.xl (after changes), performance table (the unit is MB/sec):

|| Num Bytes || CRC32 || PureJavaCrc32 ||
| 1 | 9.799 | 72.921 |
| 2 | 18.850 | 177.113 |
| 4 | 42.687 | 214.704 |
| 8 | 70.552 | 318.484 |
| 16 | 111.875 | 416.191 |
| 32 | 153.779 | 496.209 |
| 64 | 190.493 | 544.428 |
| 128 | 215.851 | 564.414 |
| 256 | 232.110 | 590.515 |
| 512 | 240.359 | 581.974 |
| 1024 | 244.682 | 597.676 |
| 2048 | 246.642 | 599.621 |
| 4096 | 249.438 | 604.247 |
| 8192 | 249.247 | 605.547 |
| 16384 | 249.524 | 606.494 |
| 32768 | 249.508 | 602.449 |
| 65536 | 250.977 | 604.064 |
| 131072 | 249.678 | 597.944 |
| 262144 | 249.505 | 603.270 |
| 524288 | 250.805 | 602.656 |
| 1048576 | 250.900 | 602.949 |
| 2097152 | 250.137 | 601.563 |
| 4194304 | 249.406 | 602.058 |
| 8388608 | 249.937 | 598.310 |
| 16777216 | 249.892 | 592.417 |
[jira] [Updated] (HADOOP-8926) hadoop.util.PureJavaCrc32 cache hit-ratio is low for static data
[ https://issues.apache.org/jira/browse/HADOOP-8926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HADOOP-8926: Status: Patch Available (was: Open) Updated for readability and fewer instructions in the inner loop.
[jira] [Commented] (HADOOP-8926) hadoop.util.PureJavaCrc32 cache hit-ratio is low for static data
[ https://issues.apache.org/jira/browse/HADOOP-8926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476945#comment-13476945 ] Hadoop QA commented on HADOOP-8926: --- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12549285/crc32-faster%2Breadable.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1632//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1632//console This message is automatically generated. 
[jira] [Commented] (HADOOP-8926) hadoop.util.PureJavaCrc32 cache hit-ratio is low for static data
[ https://issues.apache.org/jira/browse/HADOOP-8926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477039#comment-13477039 ] Robert Joseph Evans commented on HADOOP-8926: - The changes look good to me, +1. Gopal, what versions of Hadoop are you targeting with this change?
[jira] [Commented] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard
[ https://issues.apache.org/jira/browse/HADOOP-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477059#comment-13477059 ] Robert Joseph Evans commented on HADOOP-8922: - I am +1 for the change too. It looks good, thanks Damien. What versions of Hadoop were you targeting with this patch? Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard -- Key: HADOOP-8922 URL: https://issues.apache.org/jira/browse/HADOOP-8922 Project: Hadoop Common Issue Type: Improvement Components: metrics Reporter: Damien Hardy Priority: Trivial Labels: newbie, patch Attachments: HADOOP-8922-3.patch, test.html JMXJsonServlet may provide a JSONP alternative to JSON so that in-browser javascript GUIs can make requests. As an XSS protection, browsers limit requests to other domains[¹|#ref1], so metrics from cluster nodes cannot be used in a full-javascript interface. An example of this kind of dashboard is the bigdesk[²|#ref2] plugin for ElasticSearch. To achieve this, the servlet should detect a GET parameter (callback=) and modify the response by surrounding the JSON value with ( and );[³|#ref3]. The value is variable and should be provided by the client as the callback parameter value. {anchor:ref1}[1] https://developer.mozilla.org/en-US/docs/Same_origin_policy_for_JavaScript {anchor:ref2}[2] https://github.com/lukas-vlcek/bigdesk {anchor:ref3}[3] http://en.wikipedia.org/wiki/JSONP -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
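The callback-wrapping behavior requested in HADOOP-8922 can be sketched in a few lines; names here are illustrative and not the actual JMXJsonServlet code:

```java
// Minimal sketch of JSONP output: when the request carries a callback=
// parameter, wrap the JSON payload as callback(json); otherwise return
// plain JSON. A real implementation should also restrict the callback
// name (e.g. to [A-Za-z0-9_.]+) to avoid script injection.
public class JsonpSketch {
    static String render(String json, String callback) {
        if (callback == null || callback.isEmpty()) {
            return json;  // plain JSON response
        }
        return callback + "(" + json + ");";  // JSONP-wrapped response
    }

    public static void main(String[] args) {
        System.out.println(render("{\"beans\":[]}", null));        // prints: {"beans":[]}
        System.out.println(render("{\"beans\":[]}", "handleJmx")); // prints: handleJmx({"beans":[]});
    }
}
```

With the wrapped form, a dashboard page on another origin can load the response via a script tag and receive the metrics through its handleJmx function, which is exactly the same-origin workaround the issue describes.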
[jira] [Moved] (HADOOP-8932) JNI-based user-group mapping modules can be too chatty on lookup failures
[ https://issues.apache.org/jira/browse/HADOOP-8932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee moved HDFS-4064 to HADOOP-8932: -- Component/s: (was: security) security Target Version/s: (was: 3.0.0, 2.0.3-alpha, 0.23.5) Affects Version/s: (was: 0.23.5) (was: 2.0.3-alpha) (was: 3.0.0) 0.23.5 2.0.3-alpha 3.0.0 Key: HADOOP-8932 (was: HDFS-4064) Project: Hadoop Common (was: Hadoop HDFS) JNI-based user-group mapping modules can be too chatty on lookup failures - Key: HADOOP-8932 URL: https://issues.apache.org/jira/browse/HADOOP-8932 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.5 Reporter: Kihwal Lee Assignee: Kihwal Lee On a user/group lookup failure, JniBasedUnixGroupsMapping and JniBasedUnixGroupsNetgroupMapping are logging the full stack trace at WARN level. Since the caller of these methods is already logging errors, this is not needed. In branch-1, just one line is logged, so we don't need this change there. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
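The logging change described above can be sketched as follows; this is illustrative only, not the actual JniBasedUnixGroupsMapping code, and the logger and message wording are assumptions:

```java
import java.util.logging.Logger;

// Sketch of the fix: on a user/group lookup failure, emit a single WARN
// line built from the exception message instead of the full stack trace,
// since the caller of these methods already logs the error.
public class LookupLogSketch {
    private static final Logger LOG = Logger.getLogger("Groups");

    // Separated out so the one-line format is easy to see (and test).
    static String failureMessage(String user, Exception e) {
        return "Error getting groups for " + user + ": " + e.getMessage();
    }

    static void onLookupFailure(String user, Exception e) {
        // Before: LOG.log(Level.WARNING, "Error getting groups for " + user, e);
        // After: message only, no stack trace.
        LOG.warning(failureMessage(user, e));
    }
}
```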
[jira] [Updated] (HADOOP-8932) JNI-based user-group mapping modules can be too chatty on lookup failures
[ https://issues.apache.org/jira/browse/HADOOP-8932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee updated HADOOP-8932: --- Attachment: hadoop-8932.patch.txt
[jira] [Updated] (HADOOP-8932) JNI-based user-group mapping modules can be too chatty on lookup failures
[ https://issues.apache.org/jira/browse/HADOOP-8932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee updated HADOOP-8932: --- Status: Patch Available (was: Open)
[jira] [Updated] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard
[ https://issues.apache.org/jira/browse/HADOOP-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Damien Hardy updated HADOOP-8922: - Affects Version/s: 2.0.0-alpha
[jira] [Commented] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard
[ https://issues.apache.org/jira/browse/HADOOP-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477065#comment-13477065 ] Damien Hardy commented on HADOOP-8922: -- I work on CDH 4.1 now, so the next release would be great :) 2.0.0 is the version I currently use, but only 2.0.0-alpha appears in the available options. As a newbie, I don't really know how this value should be set (Fix Version?).
[jira] [Commented] (HADOOP-8883) Anonymous fallback in KerberosAuthenticator is broken
[ https://issues.apache.org/jira/browse/HADOOP-8883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477074#comment-13477074 ] Alejandro Abdelnur commented on HADOOP-8883: +1 Anonymous fallback in KerberosAuthenticator is broken - Key: HADOOP-8883 URL: https://issues.apache.org/jira/browse/HADOOP-8883 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.0.3-alpha Reporter: Robert Kanter Assignee: Robert Kanter Labels: security Fix For: 2.0.3-alpha Attachments: HADOOP-8883.patch HADOOP-8855 changed KerberosAuthenticator to handle when the JDK did the SPNEGO already; but this change broke using the fallback authenticator (PseudoAuthenticator) with an anonymous user (see OOZIE-1010). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8932) JNI-based user-group mapping modules can be too chatty on lookup failures
[ https://issues.apache.org/jira/browse/HADOOP-8932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477078#comment-13477078 ] Hadoop QA commented on HADOOP-8932: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12549319/hadoop-8932.patch.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1633//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1633//console This message is automatically generated. 
[jira] [Commented] (HADOOP-8932) JNI-based user-group mapping modules can be too chatty on lookup failures
[ https://issues.apache.org/jira/browse/HADOOP-8932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477094#comment-13477094 ] Kihwal Lee commented on HADOOP-8932: No test is included since the change only affects log messages.
[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh
[ https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477100#comment-13477100 ] Suresh Srinivas commented on HADOOP-8924: - +1 for the patch. I will commit it to branch-trunk-win. Hadoop Common creating package-info.java must not depend on sh -- Key: HADOOP-8924 URL: https://issues.apache.org/jira/browse/HADOOP-8924 Project: Hadoop Common Issue Type: Improvement Components: build Affects Versions: trunk-win Reporter: Chris Nauroth Assignee: Chris Nauroth Attachments: HADOOP-8924-branch-trunk-win.patch Currently, the build process relies on saveVersion.sh to generate package-info.java with a version annotation. The sh binary may not be available on all developers' machines (e.g. Windows without Cygwin). This issue tracks removal of that dependency in Hadoop Common. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh
[ https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas resolved HADOOP-8924. - Resolution: Fixed Fix Version/s: trunk-win Hadoop Flags: Reviewed I committed the patch to branch-trunk-win. Thank you Chris.
[jira] [Commented] (HADOOP-8932) JNI-based user-group mapping modules can be too chatty on lookup failures
[ https://issues.apache.org/jira/browse/HADOOP-8932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477110#comment-13477110 ] Suresh Srinivas commented on HADOOP-8932: - +1 for the change.
[jira] [Created] (HADOOP-8933) test-patch.sh fails erroneously on platforms that can't build native
Chris Nauroth created HADOOP-8933: - Summary: test-patch.sh fails erroneously on platforms that can't build native Key: HADOOP-8933 URL: https://issues.apache.org/jira/browse/HADOOP-8933 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0 Reporter: Chris Nauroth Assignee: Chris Nauroth If a developer is working on a platform that can't build native (like OS X right now), then test-patch.sh will report the patch as a failure due to "The patch appears to cause the build to fail." This is incorrect, because the developer's patch didn't cause the build to fail. Adding an extra optional flag to test-patch.sh would help developers on these platforms. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8933) test-patch.sh fails erroneously on platforms that can't build native
[ https://issues.apache.org/jira/browse/HADOOP-8933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477146#comment-13477146 ] Chris Nauroth commented on HADOOP-8933: --- Currently, I see this problem on Mac OS X due to native code incompatibilities, like HADOOP-7147. The root cause is that when the script pre-builds the existing trunk code, it does not use -Pnative, but when it builds after applying the patch, it does use -Pnative. Note that we don't want to remove -Pnative completely, because that was added to fix a bug reported in HADOOP-8488. Instead, if we provided an additional optional flag for developers to disable the native portion of the build, then it would still catch problems in the native code by default, but developers on non-compatible platforms would still have a way to use test-patch.sh as long as they are not working on the native code.
[jira] [Commented] (HADOOP-8932) JNI-based user-group mapping modules can be too chatty on lookup failures
[ https://issues.apache.org/jira/browse/HADOOP-8932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477148#comment-13477148 ] Suresh Srinivas commented on HADOOP-8932: - I will commit this patch soon.
[jira] [Commented] (HADOOP-8931) Add Java version to startup message
[ https://issues.apache.org/jira/browse/HADOOP-8931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477152#comment-13477152 ] Steve Loughran commented on HADOOP-8931: java.home already gets printed, doesn't it? Add Java version to startup message Key: HADOOP-8931 URL: https://issues.apache.org/jira/browse/HADOOP-8931 Project: Hadoop Common Issue Type: Improvement Affects Versions: 2.0.0-alpha Reporter: Eli Collins Assignee: Eli Collins Priority: Trivial Attachments: hadoop-8931.txt I often look at logs and have to track down the java version they were run with, it would be useful if we logged this as part of the startup message. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8931) Add Java version to startup message
[ https://issues.apache.org/jira/browse/HADOOP-8931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477155#comment-13477155 ] Aaron T. Myers commented on HADOOP-8931: +1
[jira] [Updated] (HADOOP-8932) JNI-based user-group mapping modules can be too chatty on lookup failures
[ https://issues.apache.org/jira/browse/HADOOP-8932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HADOOP-8932: Resolution: Fixed Fix Version/s: 0.23.5 2.0.3-alpha 3.0.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) I committed the patch to branch-2, 0.23 and trunk. Thank you Kihwal.
[jira] [Commented] (HADOOP-8931) Add Java version to startup message
[ https://issues.apache.org/jira/browse/HADOOP-8931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477158#comment-13477158 ] Arpit Gupta commented on HADOOP-8931: - can we also port this to branch-1? Add Java version to startup message Key: HADOOP-8931 URL: https://issues.apache.org/jira/browse/HADOOP-8931 Project: Hadoop Common Issue Type: Improvement Affects Versions: 2.0.0-alpha Reporter: Eli Collins Assignee: Eli Collins Priority: Trivial Attachments: hadoop-8931.txt I often look at logs and have to track down the java version they were run with, it would be useful if we logged this as part of the startup message. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8933) test-patch.sh fails erroneously on platforms that can't build native
[ https://issues.apache.org/jira/browse/HADOOP-8933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-8933: -- Status: Patch Available (was: Open) test-patch.sh fails erroneously on platforms that can't build native Key: HADOOP-8933 URL: https://issues.apache.org/jira/browse/HADOOP-8933 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0 Reporter: Chris Nauroth Assignee: Chris Nauroth Attachments: HADOOP-8933.patch If a developer is working on a platform that can't build native (like OS X right now), then test-patch.sh will report the patch as a failure due to The patch appears to cause the build to fail. This is incorrect, because the developer's patch didn't cause the build to fail. Adding an extra optional flag to test-patch.sh would help developers on these platforms. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8933) test-patch.sh fails erroneously on platforms that can't build native
[ https://issues.apache.org/jira/browse/HADOOP-8933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-8933: -- Attachment: HADOOP-8933.patch The attached patch introduces an optional --no-native flag. By default, the script still builds native like usual. With this optional flag, the script will not include -Pnative in the mvn calls. I tested this on OS X and Ubuntu. On OS X, test-patch.sh succeeds if I use --no-native. On Ubuntu, I confirmed that it still builds libhadoop.so by default if you don't specify --no-native. Jenkins will give this a -1 for lack of new tests. That's because the patch only changes build scripts. test-patch.sh fails erroneously on platforms that can't build native Key: HADOOP-8933 URL: https://issues.apache.org/jira/browse/HADOOP-8933 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0 Reporter: Chris Nauroth Assignee: Chris Nauroth Attachments: HADOOP-8933.patch If a developer is working on a platform that can't build native (like OS X right now), then test-patch.sh will report the patch as a failure due to The patch appears to cause the build to fail. This is incorrect, because the developer's patch didn't cause the build to fail. Adding an extra optional flag to test-patch.sh would help developers on these platforms. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8923) WEBUI shows an intermediatory page when the cookie expires.
[ https://issues.apache.org/jira/browse/HADOOP-8923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13477165#comment-13477165 ] Hudson commented on HADOOP-8923: Integrated in Hadoop-trunk-Commit #2870 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/2870/]) HADOOP-8923. JNI-based user-group mapping modules can be too chatty on lookup failures. Contributed by Kihwal Lee. (Revision 1398883) Result = SUCCESS suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1398883 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsMapping.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.java WEBUI shows an intermediatory page when the cookie expires. --- Key: HADOOP-8923 URL: https://issues.apache.org/jira/browse/HADOOP-8923 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 1.1.0 Reporter: Benoy Antony Assignee: Benoy Antony Priority: Minor Attachments: HADOOP-8923.patch The WEBUI does Authentication (SPNEGO/Custom) and then drops a cookie. Once the cookie expires, the webui displays a page saying that the authentication token expired. The user has to refresh the page to get authenticated again. This page can be avoided and the user can be re-authenticated without showing such a page. Also, when the cookie expires, a warning is logged. But there is no need to log this, as it is not of any significance. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8933) test-patch.sh fails erroneously on platforms that can't build native
[ https://issues.apache.org/jira/browse/HADOOP-8933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477182#comment-13477182 ] Hadoop QA commented on HADOOP-8933: --- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12549339/HADOOP-8933.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in . {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1634//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1634//console This message is automatically generated. test-patch.sh fails erroneously on platforms that can't build native Key: HADOOP-8933 URL: https://issues.apache.org/jira/browse/HADOOP-8933 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0 Reporter: Chris Nauroth Assignee: Chris Nauroth Attachments: HADOOP-8933.patch If a developer is working on a platform that can't build native (like OS X right now), then test-patch.sh will report the patch as a failure due to The patch appears to cause the build to fail. This is incorrect, because the developer's patch didn't cause the build to fail. 
Adding an extra optional flag to test-patch.sh would help developers on these platforms. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh
[ https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477200#comment-13477200 ] Aaron T. Myers commented on HADOOP-8924: Hey guys, does it really make sense to trade a dependency on sh for a dependency on python? Maybe it does, but at least on Unix systems I feel like sh is more likely to be available than python. (Honest question here - not trying to be a pain.) At the very least, if we stick with this being in python, we should update BUILDING.txt to say that we now have a dependency on python (and perhaps some specific version of python?) in order to build Hadoop. Hadoop Common creating package-info.java must not depend on sh -- Key: HADOOP-8924 URL: https://issues.apache.org/jira/browse/HADOOP-8924 Project: Hadoop Common Issue Type: Improvement Components: build Affects Versions: trunk-win Reporter: Chris Nauroth Assignee: Chris Nauroth Fix For: trunk-win Attachments: HADOOP-8924-branch-trunk-win.patch Currently, the build process relies on saveVersion.sh to generate package-info.java with a version annotation. The sh binary may not be available on all developers' machines (e.g. Windows without Cygwin). This issue tracks removal of that dependency in Hadoop Common. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8883) Anonymous fallback in KerberosAuthenticator is broken
[ https://issues.apache.org/jira/browse/HADOOP-8883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477204#comment-13477204 ] Hudson commented on HADOOP-8883: Integrated in Hadoop-trunk-Commit #2871 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/2871/]) HADOOP-8883. Anonymous fallback in KerberosAuthenticator is broken. (rkanter via tucu) (Revision 1398895) Result = SUCCESS tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1398895 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java * /hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java * /hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/TestKerberosAuthenticator.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Anonymous fallback in KerberosAuthenticator is broken - Key: HADOOP-8883 URL: https://issues.apache.org/jira/browse/HADOOP-8883 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.0.3-alpha Reporter: Robert Kanter Assignee: Robert Kanter Labels: security Fix For: 2.0.3-alpha Attachments: HADOOP-8883.patch HADOOP-8855 changed KerberosAuthenticator to handle when the JDK did the SPNEGO already; but this change broke using the fallback authenticator (PseudoAuthenticator) with an anonymous user (see OOZIE-1010). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8883) Anonymous fallback in KerberosAuthenticator is broken
[ https://issues.apache.org/jira/browse/HADOOP-8883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur updated HADOOP-8883: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Thanks Robert. Committed to trunk and branch-2. Anonymous fallback in KerberosAuthenticator is broken - Key: HADOOP-8883 URL: https://issues.apache.org/jira/browse/HADOOP-8883 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.0.3-alpha Reporter: Robert Kanter Assignee: Robert Kanter Labels: security Fix For: 2.0.3-alpha Attachments: HADOOP-8883.patch HADOOP-8855 changed KerberosAuthenticator to handle when the JDK did the SPNEGO already; but this change broke using the fallback authenticator (PseudoAuthenticator) with an anonymous user (see OOZIE-1010). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard
[ https://issues.apache.org/jira/browse/HADOOP-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477217#comment-13477217 ] Robert Joseph Evans commented on HADOOP-8922: - I don't know about the CDH versions and what they plan on pulling in. You will have to talk to cloudera about that. I will pull it into branch 2 though so it should be part of the 2.0.3 release. Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard -- Key: HADOOP-8922 URL: https://issues.apache.org/jira/browse/HADOOP-8922 Project: Hadoop Common Issue Type: Improvement Components: metrics Affects Versions: 2.0.0-alpha Reporter: Damien Hardy Priority: Trivial Labels: newbie, patch Attachments: HADOOP-8922-3.patch, test.html JMXJsonServlet may provide a JSONP alternative to JSON to allow javascript in browser GUI to make requests. For security purpose about XSS, browser limit request on other domain[¹|#ref1] so that metrics from cluster nodes cannot be used in a full js interface. An example of this kind of dashboard is the bigdesk[²|#ref2] plugin for ElasticSearch. In order to achieve that the servlet should detect a GET parameter (callback=) and modify the response by surrounding the Json value with ( and ); [³|#ref3] value is variable and should be provide by client as callback parameter value. {anchor:ref1}[1] https://developer.mozilla.org/en-US/docs/Same_origin_policy_for_JavaScript {anchor:ref2}[2] https://github.com/lukas-vlcek/bigdesk {anchor:ref3}[3] http://en.wikipedia.org/wiki/JSONP -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard
[ https://issues.apache.org/jira/browse/HADOOP-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477224#comment-13477224 ] Hudson commented on HADOOP-8922: Integrated in Hadoop-trunk-Commit #2872 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/2872/]) HADOOP-8922. Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard (Damien Hardy via bobby) (Revision 1398904) Result = SUCCESS bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1398904 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/jmx/JMXJsonServlet.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/jmx/TestJMXJsonServlet.java Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard -- Key: HADOOP-8922 URL: https://issues.apache.org/jira/browse/HADOOP-8922 Project: Hadoop Common Issue Type: Improvement Components: metrics Affects Versions: 2.0.0-alpha Reporter: Damien Hardy Priority: Trivial Labels: newbie, patch Attachments: HADOOP-8922-3.patch, test.html JMXJsonServlet may provide a JSONP alternative to JSON to allow javascript in browser GUI to make requests. For security purpose about XSS, browser limit request on other domain[¹|#ref1] so that metrics from cluster nodes cannot be used in a full js interface. An example of this kind of dashboard is the bigdesk[²|#ref2] plugin for ElasticSearch. In order to achieve that the servlet should detect a GET parameter (callback=) and modify the response by surrounding the Json value with ( and ); [³|#ref3] value is variable and should be provide by client as callback parameter value. 
{anchor:ref1}[1] https://developer.mozilla.org/en-US/docs/Same_origin_policy_for_JavaScript {anchor:ref2}[2] https://github.com/lukas-vlcek/bigdesk {anchor:ref3}[3] http://en.wikipedia.org/wiki/JSONP -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
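The recipe in the HADOOP-8922 description — detect a `callback=` GET parameter and surround the JSON value with `callback(` and `);` — boils down to the following. This is a minimal sketch, not the actual JMXJsonServlet code; the class name, method names, and the callback-name whitelist are assumptions made for illustration.

```java
// Minimal JSONP sketch (illustrative, not the committed JMXJsonServlet change):
// if a callback parameter is present and safe, wrap the JSON payload in
// callback(...); otherwise return the plain JSON unchanged.
public class JsonpWrapper {

    // Restrict the callback to typical JavaScript identifier characters so a
    // crafted callback value cannot inject arbitrary script into the response.
    static boolean isSafeCallback(String cb) {
        return cb != null && cb.matches("[A-Za-z_$][A-Za-z0-9_$.]*");
    }

    static String wrap(String json, String callback) {
        if (!isSafeCallback(callback)) {
            return json; // no (or unsafe) callback: fall back to plain JSON
        }
        return callback + "(" + json + ");";
    }
}
```

In a servlet this would be driven by something like `request.getParameter("callback")`, with the Content-Type switched to `application/javascript` for the JSONP case.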
[jira] [Updated] (HADOOP-8931) Add Java version to startup message
[ https://issues.apache.org/jira/browse/HADOOP-8931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eli Collins updated HADOOP-8931: Target Version/s: 1.1.1, 2.0.3-alpha (was: 2.0.3-alpha) Affects Version/s: 1.0.0 @Steve, JAVA_HOME doesn't get printed. @Arpit, I'll merge this to branch-1 as well. @ATM, thanks for the review. Add Java version to startup message Key: HADOOP-8931 URL: https://issues.apache.org/jira/browse/HADOOP-8931 Project: Hadoop Common Issue Type: Improvement Affects Versions: 1.0.0, 2.0.0-alpha Reporter: Eli Collins Assignee: Eli Collins Priority: Trivial Attachments: hadoop-8931.txt I often look at logs and have to track down the java version they were run with, it would be useful if we logged this as part of the startup message. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
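The HADOOP-8931 change itself is small: the startup banner gains one extra line built from the `java.version` system property. A hedged, self-contained sketch (the real patch edits Hadoop's startup/shutdown message utility; the class and field layout below are invented for illustration):

```java
// Sketch of logging the JVM version in a daemon startup message, as proposed
// in HADOOP-8931. Standalone illustration only; not the actual Hadoop code.
public class StartupBanner {

    static String buildStartupMessage(String daemonName, String host) {
        StringBuilder sb = new StringBuilder();
        sb.append("STARTUP_MSG: Starting ").append(daemonName).append('\n');
        sb.append("STARTUP_MSG:   host = ").append(host).append('\n');
        // The line this JIRA adds: record the JVM the daemon is running with,
        // so log readers no longer have to track the Java version down separately.
        sb.append("STARTUP_MSG:   java = ").append(System.getProperty("java.version"));
        return sb.toString();
    }
}
```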
[jira] [Commented] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard
[ https://issues.apache.org/jira/browse/HADOOP-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477232#comment-13477232 ] Robert Joseph Evans commented on HADOOP-8922: - Damien, I put the patch into trunk, but it did not merge that cleanly into branch-2. It looks like there are a number of fixes that went into trunk that have not made it into branch 2. If you could provide a patch for branch-2 that would be great. Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard -- Key: HADOOP-8922 URL: https://issues.apache.org/jira/browse/HADOOP-8922 Project: Hadoop Common Issue Type: Improvement Components: metrics Affects Versions: 2.0.0-alpha Reporter: Damien Hardy Priority: Trivial Labels: newbie, patch Attachments: HADOOP-8922-3.patch, test.html JMXJsonServlet may provide a JSONP alternative to JSON to allow javascript in browser GUI to make requests. For security purpose about XSS, browser limit request on other domain[¹|#ref1] so that metrics from cluster nodes cannot be used in a full js interface. An example of this kind of dashboard is the bigdesk[²|#ref2] plugin for ElasticSearch. In order to achieve that the servlet should detect a GET parameter (callback=) and modify the response by surrounding the Json value with ( and ); [³|#ref3] value is variable and should be provide by client as callback parameter value. {anchor:ref1}[1] https://developer.mozilla.org/en-US/docs/Same_origin_policy_for_JavaScript {anchor:ref2}[2] https://github.com/lukas-vlcek/bigdesk {anchor:ref3}[3] http://en.wikipedia.org/wiki/JSONP -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8914) Automate release builds
[ https://issues.apache.org/jira/browse/HADOOP-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477236#comment-13477236 ] Robert Joseph Evans commented on HADOOP-8914: - That would definitely be a good start. Part of a release is also signing and uploading the JARs to Nexus (The maven repo), so that would also have to be done manually then. Automate release builds --- Key: HADOOP-8914 URL: https://issues.apache.org/jira/browse/HADOOP-8914 Project: Hadoop Common Issue Type: Task Reporter: Eli Collins Hadoop releases are currently created manually by the RM (following http://wiki.apache.org/hadoop/HowToRelease), which means various aspects of the build are ad hoc, eg what tool chain was used to compile the java and native code varies from release to release. Other steps can be inconsistent since they're done manually eg recently the checksums for an RC were incorrect. Let's use the jenkins toolchain and create a job that automates creating release builds so that the only manual thing about releasing is publishing to mvn central. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard
[ https://issues.apache.org/jira/browse/HADOOP-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Damien Hardy updated HADOOP-8922: - Attachment: HADOOP-8922-4-branch-2.patch patch adapted for branch-2 Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard -- Key: HADOOP-8922 URL: https://issues.apache.org/jira/browse/HADOOP-8922 Project: Hadoop Common Issue Type: Improvement Components: metrics Affects Versions: 2.0.0-alpha Reporter: Damien Hardy Priority: Trivial Labels: newbie, patch Attachments: HADOOP-8922-3.patch, HADOOP-8922-4-branch-2.patch, test.html JMXJsonServlet may provide a JSONP alternative to JSON to allow javascript in browser GUI to make requests. For security purpose about XSS, browser limit request on other domain[¹|#ref1] so that metrics from cluster nodes cannot be used in a full js interface. An example of this kind of dashboard is the bigdesk[²|#ref2] plugin for ElasticSearch. In order to achieve that the servlet should detect a GET parameter (callback=) and modify the response by surrounding the Json value with ( and ); [³|#ref3] value is variable and should be provide by client as callback parameter value. {anchor:ref1}[1] https://developer.mozilla.org/en-US/docs/Same_origin_policy_for_JavaScript {anchor:ref2}[2] https://github.com/lukas-vlcek/bigdesk {anchor:ref3}[3] http://en.wikipedia.org/wiki/JSONP -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard
[ https://issues.apache.org/jira/browse/HADOOP-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477308#comment-13477308 ] Damien Hardy commented on HADOOP-8922: -- Hi Robert, Thank you for merging on trunk. Here is the patch for branch-2. Trying to keep the most part of the current code. Didn't change existing choices (like multiple jg.close() instead of finally) Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard -- Key: HADOOP-8922 URL: https://issues.apache.org/jira/browse/HADOOP-8922 Project: Hadoop Common Issue Type: Improvement Components: metrics Affects Versions: 2.0.0-alpha Reporter: Damien Hardy Priority: Trivial Labels: newbie, patch Attachments: HADOOP-8922-3.patch, HADOOP-8922-4-branch-2.patch, test.html JMXJsonServlet may provide a JSONP alternative to JSON to allow javascript in browser GUI to make requests. For security purpose about XSS, browser limit request on other domain[¹|#ref1] so that metrics from cluster nodes cannot be used in a full js interface. An example of this kind of dashboard is the bigdesk[²|#ref2] plugin for ElasticSearch. In order to achieve that the servlet should detect a GET parameter (callback=) and modify the response by surrounding the Json value with ( and ); [³|#ref3] value is variable and should be provide by client as callback parameter value. {anchor:ref1}[1] https://developer.mozilla.org/en-US/docs/Same_origin_policy_for_JavaScript {anchor:ref2}[2] https://github.com/lukas-vlcek/bigdesk {anchor:ref3}[3] http://en.wikipedia.org/wiki/JSONP -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard
[ https://issues.apache.org/jira/browse/HADOOP-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477309#comment-13477309 ] Hadoop QA commented on HADOOP-8922: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12549374/HADOOP-8922-4-branch-2.patch against trunk revision . {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1635//console This message is automatically generated. Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard -- Key: HADOOP-8922 URL: https://issues.apache.org/jira/browse/HADOOP-8922 Project: Hadoop Common Issue Type: Improvement Components: metrics Affects Versions: 2.0.0-alpha Reporter: Damien Hardy Priority: Trivial Labels: newbie, patch Attachments: HADOOP-8922-3.patch, HADOOP-8922-4-branch-2.patch, test.html JMXJsonServlet may provide a JSONP alternative to JSON to allow javascript in browser GUI to make requests. For security purpose about XSS, browser limit request on other domain[¹|#ref1] so that metrics from cluster nodes cannot be used in a full js interface. An example of this kind of dashboard is the bigdesk[²|#ref2] plugin for ElasticSearch. In order to achieve that the servlet should detect a GET parameter (callback=) and modify the response by surrounding the Json value with ( and ); [³|#ref3] value is variable and should be provide by client as callback parameter value. {anchor:ref1}[1] https://developer.mozilla.org/en-US/docs/Same_origin_policy_for_JavaScript {anchor:ref2}[2] https://github.com/lukas-vlcek/bigdesk {anchor:ref3}[3] http://en.wikipedia.org/wiki/JSONP -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard
[ https://issues.apache.org/jira/browse/HADOOP-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477317#comment-13477317 ] Damien Hardy commented on HADOOP-8922: -- Don't know how to make Hudson apply the last patch on branch-2 for testing :/ Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard -- Key: HADOOP-8922 URL: https://issues.apache.org/jira/browse/HADOOP-8922 Project: Hadoop Common Issue Type: Improvement Components: metrics Affects Versions: 2.0.0-alpha Reporter: Damien Hardy Priority: Trivial Labels: newbie, patch Attachments: HADOOP-8922-3.patch, HADOOP-8922-4-branch-2.patch, test.html JMXJsonServlet may provide a JSONP alternative to JSON to allow javascript in browser GUI to make requests. For security purpose about XSS, browser limit request on other domain[¹|#ref1] so that metrics from cluster nodes cannot be used in a full js interface. An example of this kind of dashboard is the bigdesk[²|#ref2] plugin for ElasticSearch. In order to achieve that the servlet should detect a GET parameter (callback=) and modify the response by surrounding the Json value with ( and ); [³|#ref3] value is variable and should be provide by client as callback parameter value. {anchor:ref1}[1] https://developer.mozilla.org/en-US/docs/Same_origin_policy_for_JavaScript {anchor:ref2}[2] https://github.com/lukas-vlcek/bigdesk {anchor:ref3}[3] http://en.wikipedia.org/wiki/JSONP -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8926) hadoop.util.PureJavaCrc32 cache hit-ratio is low for static data
[ https://issues.apache.org/jira/browse/HADOOP-8926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13477318#comment-13477318 ] Gopal V commented on HADOOP-8926: - The patches are on Hadoop 2.x (trunk) for now - will move it to branch-1.1 once it is baked in. hadoop.util.PureJavaCrc32 cache hit-ratio is low for static data Key: HADOOP-8926 URL: https://issues.apache.org/jira/browse/HADOOP-8926 Project: Hadoop Common Issue Type: Improvement Components: util Affects Versions: 2.0.3-alpha Environment: Ubuntu 10.10 i386 Reporter: Gopal V Assignee: Gopal V Priority: Trivial Labels: optimization Attachments: crc32-faster+readable.patch, crc32-faster+test.patch, pure-crc32-cache-hit.patch While running microbenchmarks for the HDFS write codepath, a significant part of the CPU fraction was consumed by DataChecksum.update(). The attached patch converts the static arrays in CRC32 into a single linear array for a performance boost in the inner loop. Milliseconds for 1 GiB (16400 loops over a 64 KiB chunk):
|| platform || original || cache-aware || improvement ||
| x86 | 3894 | 2304 | 40.83 |
| x86_64 | 2131 | 1826 | 14 |
The performance improvement on x86 is rather larger than in the 64-bit case, due to the extra register/stack pressure caused by the static arrays. A closer analysis of the PureJavaCrc32 JIT code shows the following assembly fragment:
{code}
0x40f1e345: mov $0x184,%ecx
0x40f1e34a: mov 0x4415b560(%ecx),%ecx  ;*getstatic T8_5
                                       ; - PureJavaCrc32::update@95 (line 61)
                                       ; {oop('PureJavaCrc32')}
0x40f1e350: mov %ecx,0x2c(%esp)
{code}
Basically, the static variables T8_0 through T8_7 are being spilled to the stack because of register pressure. The x86_64 case has a lower likelihood of such pessimistic JIT code due to the increased number of registers. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
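The "single linear array" idea above is that the eight 256-entry tables T8_0..T8_7 become one `int[8 * 256]` indexed as `TABLE[(slice << 8) + byte]`, so the JIT tracks a single base pointer instead of eight static array references. A rough sketch of that layout follows; for brevity it runs a plain byte-at-a-time CRC-32 (polynomial 0xEDB88320) over slice 0 only, whereas the real PureJavaCrc32 consumes 8 bytes per iteration across all 8 slices.

```java
// Sketch of the flattened-table layout described in HADOOP-8926: all eight
// slicing tables live in one linear array, indexed as TABLE[(slice << 8) + b].
// Illustration only; the actual patch is in the attachments above.
public class FlatCrc32 {

    static final int SLICES = 8;
    static final int[] TABLE = buildTable();

    static int[] buildTable() {
        int[] t = new int[SLICES * 256];
        // Slice 0: the standard CRC-32 table for polynomial 0xEDB88320.
        for (int i = 0; i < 256; i++) {
            int c = i;
            for (int j = 0; j < 8; j++) {
                c = (c & 1) != 0 ? (c >>> 1) ^ 0xEDB88320 : c >>> 1;
            }
            t[i] = c;
        }
        // Slices 1..7, derived from the previous slice, laid out contiguously.
        for (int s = 1; s < SLICES; s++) {
            for (int i = 0; i < 256; i++) {
                int prev = t[(s - 1) * 256 + i];
                t[s * 256 + i] = (prev >>> 8) ^ t[prev & 0xff];
            }
        }
        return t;
    }

    // Byte-at-a-time update using slice 0 of the flattened table. The
    // (0 << 8) term just makes the linear-indexing scheme explicit.
    static int crc32(byte[] data) {
        int crc = 0xFFFFFFFF;
        for (byte b : data) {
            crc = (crc >>> 8) ^ TABLE[(0 << 8) + ((crc ^ b) & 0xff)];
        }
        return ~crc;
    }
}
```

With one array, every table access shares one base register, which is exactly what relieves the spill pressure shown in the assembly fragment above.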
[jira] [Updated] (HADOOP-7713) dfs -count -q should label output column
[ https://issues.apache.org/jira/browse/HADOOP-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Allen updated HADOOP-7713: --- Attachment: HADOOP-7713.patch Patch updated in line with review comment dfs -count -q should label output column Key: HADOOP-7713 URL: https://issues.apache.org/jira/browse/HADOOP-7713 Project: Hadoop Common Issue Type: Improvement Reporter: Nigel Daley Assignee: Jonathan Allen Priority: Trivial Labels: newbie Attachments: HADOOP-7713.patch, HADOOP-7713.patch, HADOOP-7713.patch, HADOOP-7713.patch These commands should label the output columns:
{code}
hadoop dfs -count dir...dir
hadoop dfs -count -q dir...dir
{code}
Current output of the 2nd command above:
{code}
% hadoop dfs -count -q /user/foo /tmp
none inf 9569 9493 6372553322 hdfs://nn1.bar.com/user/foo
none inf 101 2689 209349812906 hdfs://nn1.bar.com/tmp
{code}
It is not obvious what these columns mean. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-7713) dfs -count -q should label output column
[ https://issues.apache.org/jira/browse/HADOOP-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Allen updated HADOOP-7713: --- Release Note: Added -header option to fs -count command to display a header record in the report. (was: Added -h option to fs -count command to display a header record in the report.) dfs -count -q should label output column Key: HADOOP-7713 URL: https://issues.apache.org/jira/browse/HADOOP-7713 Project: Hadoop Common Issue Type: Improvement Reporter: Nigel Daley Assignee: Jonathan Allen Priority: Trivial Labels: newbie Attachments: HADOOP-7713.patch, HADOOP-7713.patch, HADOOP-7713.patch, HADOOP-7713.patch These commands should label the output columns: {code} hadoop dfs -count dir...dir hadoop dfs -count -q dir...dir {code} Current output of the 2nd command above: {code} % hadoop dfs -count -q /user/foo /tmp none inf 9569 9493 6372553322 hdfs://nn1.bar.com/user/foo none inf 101 2689 209349812906 hdfs://nn1.bar.com/tmp {code} It is not obvious what these columns mean. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
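For reference, the labeled output the patch aims at would look something like the following header line (column names as documented in the Hadoop fs shell guide for count; the exact header text depends on the final patch):

{code}
QUOTA  REM_QUOTA  SPACE_QUOTA  REM_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
{code}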
[jira] [Commented] (HADOOP-8933) test-patch.sh fails erroneously on platforms that can't build native
[ https://issues.apache.org/jira/browse/HADOOP-8933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477331#comment-13477331 ] Chris Nauroth commented on HADOOP-8933: --- Thanks, Colin. I missed HADOOP-8776. I'll close this out as a duplicate and participate on the original. test-patch.sh fails erroneously on platforms that can't build native Key: HADOOP-8933 URL: https://issues.apache.org/jira/browse/HADOOP-8933 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0 Reporter: Chris Nauroth Assignee: Chris Nauroth Attachments: HADOOP-8933.patch If a developer is working on a platform that can't build native (like OS X right now), then test-patch.sh will report the patch as a failure due to "The patch appears to cause the build to fail." This is incorrect, because the developer's patch didn't cause the build to fail. Adding an extra optional flag to test-patch.sh would help developers on these platforms. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8933) test-patch.sh fails erroneously on platforms that can't build native
[ https://issues.apache.org/jira/browse/HADOOP-8933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-8933: -- Resolution: Duplicate Status: Resolved (was: Patch Available) test-patch.sh fails erroneously on platforms that can't build native Key: HADOOP-8933 URL: https://issues.apache.org/jira/browse/HADOOP-8933 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0 Reporter: Chris Nauroth Assignee: Chris Nauroth Attachments: HADOOP-8933.patch If a developer is working on a platform that can't build native (like OS X right now), then test-patch.sh will report the patch as a failure due to The patch appears to cause the build to fail. This is incorrect, because the developer's patch didn't cause the build to fail. Adding an extra optional flag to test-patch.sh would help developers on these platforms. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-6496) HttpServer sends wrong content-type for CSS files (and others)
[ https://issues.apache.org/jira/browse/HADOOP-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Mitic updated HADOOP-6496: --- Attachment: HADOOP-6496.branch-1.1.backport.2.patch Attaching the patch for branch-1.1. HttpServer sends wrong content-type for CSS files (and others) -- Key: HADOOP-6496 URL: https://issues.apache.org/jira/browse/HADOOP-6496 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.21.0, 0.22.0 Reporter: Lars Francke Assignee: Ivan Mitic Priority: Minor Fix For: 0.22.0 Attachments: HADOOP-6496.branch-1.1.backport.2.patch, HADOOP-6496.branch-1.1.backport.patch, hadoop-6496.txt, hadoop-6496.txt CSS files are sent as text/html, causing problems if the HTML page is rendered in standards mode. The HDFS interface for example still works because it is rendered in quirks mode; the HBase interface doesn't work because it is rendered in standards mode. See HBASE-2110 for more details. I've had a quick look at HttpServer but I'm too unfamiliar with it to see the problem. I think this started happening with HADOOP-6441, which would lead me to believe that the filter is called for every request and not only *.jsp and *.html. I'd consider this a bug but I don't know enough about this to provide a fix. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Reopened] (HADOOP-6496) HttpServer sends wrong content-type for CSS files (and others)
[ https://issues.apache.org/jira/browse/HADOOP-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Mitic reopened HADOOP-6496: Reopening for branch 1.1 backport. HttpServer sends wrong content-type for CSS files (and others) -- Key: HADOOP-6496 URL: https://issues.apache.org/jira/browse/HADOOP-6496 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.21.0, 0.22.0 Reporter: Lars Francke Assignee: Ivan Mitic Priority: Minor Fix For: 0.22.0 Attachments: HADOOP-6496.branch-1.1.backport.2.patch, HADOOP-6496.branch-1.1.backport.patch, hadoop-6496.txt, hadoop-6496.txt CSS files are send as text/html causing problems if the HTML page is rendered in standards mode. The HDFS interface for example still works because it is rendered in quirks mode, the HBase interface doesn't work because it is rendered in standards mode. See HBASE-2110 for more details. I've had a quick look at HttpServer but I'm too unfamiliar with it to see the problem. I think this started happening with HADOOP-6441 which would lead me to believe that the filter is called for every request and not only *.jsp and *.html. I'd consider this a bug but I don't know enough about this to provide a fix. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8776) Provide an option in test-patch that can enable / disable compiling native code
[ https://issues.apache.org/jira/browse/HADOOP-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477337#comment-13477337 ] Chris Nauroth commented on HADOOP-8776: --- I'm interested in getting a solution to this, as I just accidentally repeated this work on HADOOP-8933 (now closed as duplicate). Let me know if it's helpful for me to work on this patch, code review, test, or anything else. Thanks, --Chris Provide an option in test-patch that can enable / disable compiling native code --- Key: HADOOP-8776 URL: https://issues.apache.org/jira/browse/HADOOP-8776 Project: Hadoop Common Issue Type: Improvement Components: build Affects Versions: 3.0.0 Reporter: Hemanth Yamijala Assignee: Hemanth Yamijala Priority: Minor Attachments: HADOOP-8776.patch, HADOOP-8776.patch, HADOOP-8776.patch The test-patch script in Hadoop source runs a native compile with the patch. On platforms like MAC, there are issues with the native compile that make it difficult to use test-patch. This JIRA is to try and provide an option to make the native compilation optional. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-6311) Add support for unix domain sockets to JNI libs
[ https://issues.apache.org/jira/browse/HADOOP-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477345#comment-13477345 ] Todd Lipcon commented on HADOOP-6311: - Hi Colin, Thanks for writing up the design doc. I think it probably should actually go on HDFS-347, which is the overall feature JIRA, rather than this one, which is about one of the implementation subtasks. But, anyway, here are some comments: {quote} * Portable. Hadoop supports multiple operating systems and environments, including Linux, Solaris, and Windows. {quote} IMO, there is not a requirement that performance enhancements work on all systems. It is up to the maintainers of each port to come up with the most efficient way to do things. Though there is an active effort to get Hadoop working on Windows, it is not yet a requirement. So long as we maintain the current TCP-based read (which we have to, anyway, for remote access), we'll have portability. If the Windows port doesn't initially offer this feature, that seems acceptable to me, as they could later add whatever mechanism makes the most sense for them. {quote} * High performance. If performance is compromised, there is no point to any of this work, because clients could simply use the existing, non-short-circuit write pathways to access data. {quote} Should clarify that the performance of the mechanism by which FDs are _passed_ is less important, since the client will cache the open FDs and just re-use them for subsequent random reads against the same file (the primary use case for this improvement). So long as the overhead of passing the FDs isn't huge, we should be OK. {quote} There are other problems. How would the datanode clients and the server decide on a socket path? If it asks every time prior to connecting, that could be slow. If the DFSClient cached this socket path, how long should it cache it before expiring the cache? 
What happens if the administrator does not properly set up the socket path, as discussed earlier? What happens if the administrator wants to put multiple DataNodes on the same node? {quote} Per above, slowness here is not a concern, since we only need to do the socket-passing on file open. HDFS applications generally open a file once and then perform many reads against the same block before opening the next block. As for how the socket path is communicated, why not do it via an RPC? For example, in your solution #3, we're using an RPC to communicate a cookie. Instead of that, it can just return its abstract namespace socket name. (You seem to propose this under solution #3 below, but reject it here in solution #1.) Another option would be to add a new field to the DatanodeId/DatanodeRegistration: when the client gets block locations it could also include the socket paths. {quote} The response is not a path, but a 64-bit cookie. The DFSClient then connects to the DN via a UNIX domain socket, and presents the cookie. In response, he receives the file descriptor. {quote} I don't see the purpose of the cookie, still, since it adds yet another opaque token, and requires the DN code to publish the file descriptor with a cookie, and we end up with extra data structures, cached open files, cache expiration policies, etc. {quote} Choice #3. Blocking FdServer versus non-blocking FdServer. Non-blocking servers in C are somewhat more complex than blocking servers. However, if I used a blocking server, there would be no obvious way to determine how many threads it should use. Because it depends on how busy the server is expected to be, only the system administrator can know ahead of time. Additionally, many schedulers do not deal well with a large number of threads, especially on older versions of Linux and commercial UNIX variants. Coincidentally, these happen to be the exactly kind of systems many of our users run. {quote} I don't really buy this. 
The socket only needs to be active long enough to pass a single fd, which should take a few milliseconds. The number of requests for fd-passing is based on the number of block opens, _not_ the number of reads. So a small handful of threads should be able to handle even significant workloads just fine. We also do fine with threads on the data xceiver path, often configured into the hundreds or thousands. {quote} Another problem with blocking servers is that shutting them down can be difficult. Since there is no time limit on blocking I/O, a message sent to the server to terminate may take a while, or possibly forever, to be acted on. This may seem like a trivial or unimportant problem, but it is a very real one in unit tests. Socket receive and send timeouts can reduce the extra time needed to shut down, but never quite eliminate it. {quote} Again, I don't buy it; we do fine with blocking IO everywhere else. Why is this context different? *Wire protocol* The wire
[jira] [Commented] (HADOOP-8776) Provide an option in test-patch that can enable / disable compiling native code
[ https://issues.apache.org/jira/browse/HADOOP-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477351#comment-13477351 ] Jianbin Wei commented on HADOOP-8776: - I agree this needs to be done and merged into trunk. Another developer's time was wasted. :-) Provide an option in test-patch that can enable / disable compiling native code --- Key: HADOOP-8776 URL: https://issues.apache.org/jira/browse/HADOOP-8776 Project: Hadoop Common Issue Type: Improvement Components: build Affects Versions: 3.0.0 Reporter: Hemanth Yamijala Assignee: Hemanth Yamijala Priority: Minor Attachments: HADOOP-8776.patch, HADOOP-8776.patch, HADOOP-8776.patch The test-patch script in Hadoop source runs a native compile with the patch. On platforms like MAC, there are issues with the native compile that make it difficult to use test-patch. This JIRA is to try and provide an option to make the native compilation optional. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-7713) dfs -count -q should label output column
[ https://issues.apache.org/jira/browse/HADOOP-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477353#comment-13477353 ] Hadoop QA commented on HADOOP-7713: --- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12549381/HADOOP-7713.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1636//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1636//console This message is automatically generated. 
dfs -count -q should label output column Key: HADOOP-7713 URL: https://issues.apache.org/jira/browse/HADOOP-7713 Project: Hadoop Common Issue Type: Improvement Reporter: Nigel Daley Assignee: Jonathan Allen Priority: Trivial Labels: newbie Attachments: HADOOP-7713.patch, HADOOP-7713.patch, HADOOP-7713.patch, HADOOP-7713.patch These commands should label the output columns: {code} hadoop dfs -count dir...dir hadoop dfs -count -q dir...dir {code} Current output of the 2nd command above: {code} % hadoop dfs -count -q /user/foo /tmp none inf 9569 9493 6372553322 hdfs://nn1.bar.com/user/foo none inf 101 2689 209349812906 hdfs://nn1.bar.com/tmp {code} It is not obvious what these columns mean. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8931) Add Java version to startup message
[ https://issues.apache.org/jira/browse/HADOOP-8931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477370#comment-13477370 ] Hudson commented on HADOOP-8931: Integrated in Hadoop-trunk-Commit #2874 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/2874/]) HADOOP-8931. Add Java version to startup message. Contributed by Eli Collins (Revision 1398998) Result = SUCCESS eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1398998 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java Add Java version to startup message Key: HADOOP-8931 URL: https://issues.apache.org/jira/browse/HADOOP-8931 Project: Hadoop Common Issue Type: Improvement Affects Versions: 1.0.0, 2.0.0-alpha Reporter: Eli Collins Assignee: Eli Collins Priority: Trivial Attachments: hadoop-8931.txt I often look at logs and have to track down the java version they were run with, it would be useful if we logged this as part of the startup message. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8930) Cumulative code coverage calculation
[ https://issues.apache.org/jira/browse/HADOOP-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrey Klochkov updated HADOOP-8930: Attachment: HADOOP-8930.patch Attaching a patch for 2.x/trunk Cumulative code coverage calculation Key: HADOOP-8930 URL: https://issues.apache.org/jira/browse/HADOOP-8930 Project: Hadoop Common Issue Type: Improvement Components: test Affects Versions: 0.23.3, 2.0.2-alpha Reporter: Andrey Klochkov Assignee: Andrey Klochkov Attachments: HADOOP-8930.patch When analyzing code coverage in Hadoop Core, we noticed that some coverage gaps are caused by the way the coverage calculation is done currently. More specifically, right now coverage can not be calculated for the whole Core at once, but can only be calculated separately for top level modules like common-project, hadoop-hdfs-project etc. At the same time, some code in particular modules is tested by tests in other modules of Core. For example, org.apache.hadoop.fs from hadoop-common-project/hadoop-common is not covered there but it's covered by tests under hadoop-hdfs-project. To enable calculation of cumulative code coverage it's needed to move Clover profile definition up one level, from hadoop-project/pom.xml to the top level pom.xml (hadoop-main). Patch both for 0.23 and 2.x will be attached shortly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard
[ https://issues.apache.org/jira/browse/HADOOP-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477377#comment-13477377 ] Robert Joseph Evans commented on HADOOP-8922: - I ran the tests myself and the new patch looks good. Thanks for the quick turnaround time. Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard -- Key: HADOOP-8922 URL: https://issues.apache.org/jira/browse/HADOOP-8922 Project: Hadoop Common Issue Type: Improvement Components: metrics Affects Versions: 2.0.0-alpha Reporter: Damien Hardy Priority: Trivial Labels: newbie, patch Fix For: 3.0.0, 2.0.3-alpha Attachments: HADOOP-8922-3.patch, HADOOP-8922-4-branch-2.patch, test.html JMXJsonServlet may provide a JSONP alternative to JSON to allow javascript in a browser GUI to make requests. For security purposes (XSS), browsers limit requests to other domains[¹|#ref1], so metrics from cluster nodes cannot be used in a full js interface. An example of this kind of dashboard is the bigdesk[²|#ref2] plugin for ElasticSearch. In order to achieve that, the servlet should detect a GET parameter (callback=) and modify the response by surrounding the Json value with callback( and );[³|#ref3]. The callback name is variable and should be provided by the client as the callback parameter value. {anchor:ref1}[1] https://developer.mozilla.org/en-US/docs/Same_origin_policy_for_JavaScript {anchor:ref2}[2] https://github.com/lukas-vlcek/bigdesk {anchor:ref3}[3] http://en.wikipedia.org/wiki/JSONP -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
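A minimal sketch of the callback handling involved (a hypothetical helper, not the actual JMXJsonServlet code; the identifier whitelist is an added assumption, since reflecting an arbitrary callback= value back into a script response would itself be an XSS vector):

```java
public class JsonpWrapper {
    // Accept only conservative JS identifier characters in the callback
    // name, so the reflected parameter cannot inject arbitrary script.
    public static boolean isSafeCallback(String cb) {
        return cb != null && cb.matches("[A-Za-z_$][A-Za-z0-9_$.]*");
    }

    // Surround the JSON payload with "callback(" and ");".
    public static String wrap(String json, String callback) {
        if (!isSafeCallback(callback)) {
            throw new IllegalArgumentException("unsafe callback name: " + callback);
        }
        return callback + "(" + json + ");";
    }

    public static void main(String[] args) {
        // e.g. a request like /jmx?callback=render
        System.out.println(wrap("{\"beans\":[]}", "render")); // render({"beans":[]});
    }
}
```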
[jira] [Updated] (HADOOP-8922) Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard
[ https://issues.apache.org/jira/browse/HADOOP-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Joseph Evans updated HADOOP-8922: Resolution: Fixed Fix Version/s: 2.0.3-alpha 3.0.0 Status: Resolved (was: Patch Available) Provide alternate JSONP output for JMXJsonServlet to allow javascript in browser dashboard -- Key: HADOOP-8922 URL: https://issues.apache.org/jira/browse/HADOOP-8922 Project: Hadoop Common Issue Type: Improvement Components: metrics Affects Versions: 2.0.0-alpha Reporter: Damien Hardy Priority: Trivial Labels: newbie, patch Fix For: 3.0.0, 2.0.3-alpha Attachments: HADOOP-8922-3.patch, HADOOP-8922-4-branch-2.patch, test.html JMXJsonServlet may provide a JSONP alternative to JSON to allow javascript in a browser GUI to make requests. For security purposes (XSS), browsers limit requests to other domains[¹|#ref1], so metrics from cluster nodes cannot be used in a full js interface. An example of this kind of dashboard is the bigdesk[²|#ref2] plugin for ElasticSearch. In order to achieve that, the servlet should detect a GET parameter (callback=) and modify the response by surrounding the Json value with callback( and );[³|#ref3]. The callback name is variable and should be provided by the client as the callback parameter value. {anchor:ref1}[1] https://developer.mozilla.org/en-US/docs/Same_origin_policy_for_JavaScript {anchor:ref2}[2] https://github.com/lukas-vlcek/bigdesk {anchor:ref3}[3] http://en.wikipedia.org/wiki/JSONP -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8930) Cumulative code coverage calculation
[ https://issues.apache.org/jira/browse/HADOOP-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrey Klochkov updated HADOOP-8930: Status: Patch Available (was: Open) Cumulative code coverage calculation Key: HADOOP-8930 URL: https://issues.apache.org/jira/browse/HADOOP-8930 Project: Hadoop Common Issue Type: Improvement Components: test Affects Versions: 2.0.2-alpha, 0.23.3 Reporter: Andrey Klochkov Assignee: Andrey Klochkov Attachments: HADOOP-8930.patch When analyzing code coverage in Hadoop Core, we noticed that some coverage gaps are caused by the way the coverage calculation is done currently. More specifically, right now coverage can not be calculated for the whole Core at once, but can only be calculated separately for top level modules like common-project, hadoop-hdfs-project etc. At the same time, some code in particular modules is tested by tests in other modules of Core. For example, org.apache.hadoop.fs from hadoop-common-project/hadoop-common is not covered there but it's covered by tests under hadoop-hdfs-project. To enable calculation of cumulative code coverage it's needed to move Clover profile definition up one level, from hadoop-project/pom.xml to the top level pom.xml (hadoop-main). Patch both for 0.23 and 2.x will be attached shortly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
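Concretely, the move described above amounts to hoisting the profile definition up one pom. A sketch of what the top-level (hadoop-main) pom.xml would gain — profile body abbreviated; the clover id and activation property are assumptions based on a typical Clover setup, not the attached patch:

{code}
<profiles>
  <profile>
    <id>clover</id>
    <activation>
      <property><name>clover</name></property>
    </activation>
    <!-- clover-maven-plugin configuration, moved unchanged from
         hadoop-project/pom.xml so one instrumentation session
         spans all modules -->
  </profile>
</profiles>
{code}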
[jira] [Updated] (HADOOP-8926) hadoop.util.PureJavaCrc32 cache hit-ratio is low for static data
[ https://issues.apache.org/jira/browse/HADOOP-8926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Joseph Evans updated HADOOP-8926: Resolution: Fixed Fix Version/s: 2.0.3-alpha 3.0.0 Status: Resolved (was: Patch Available) Thanks Gopal. I put this into trunk and branch-2. hadoop.util.PureJavaCrc32 cache hit-ratio is low for static data Key: HADOOP-8926 URL: https://issues.apache.org/jira/browse/HADOOP-8926 Project: Hadoop Common Issue Type: Improvement Components: util Affects Versions: 2.0.3-alpha Environment: Ubuntu 10.10 i386 Reporter: Gopal V Assignee: Gopal V Priority: Trivial Labels: optimization Fix For: 3.0.0, 2.0.3-alpha Attachments: crc32-faster+readable.patch, crc32-faster+test.patch, pure-crc32-cache-hit.patch While running microbenchmarks for HDFS write codepath, a significant part of the CPU fraction was consumed by the DataChecksum.update(). The attached patch converts the static arrays in CRC32 into a single linear array for a performance boost in the inner loop. milli-seconds for 1Gig (16400 loop over a 64kb chunk) || platform || original || cache-aware || improvement || | x86 | 3894 | 2304 | 40.83 | | x86_64 | 2131 | 1826 | 14 | The performance improvement on x86 is rather larger than the 64bit case, due to the extra register/stack pressure caused by the static arrays. A closer analysis of the PureJavaCrc32 JIT code shows the following assembly fragment {code} 0x40f1e345: mov$0x184,%ecx 0x40f1e34a: mov0x4415b560(%ecx),%ecx ;*getstatic T8_5 ; - PureJavaCrc32::update@95 (line 61) ; {oop('PureJavaCrc32')} 0x40f1e350: mov%ecx,0x2c(%esp) {code} Basically, the static variables T8_0 through to T8_7 are being spilled to the stack because of register pressure. The x86_64 case has a lower likelihood of such pessimistic JIT code due to the increased number of registers. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8931) Add Java version to startup message
[ https://issues.apache.org/jira/browse/HADOOP-8931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eli Collins updated HADOOP-8931: Attachment: hadoop-8931-b1.txt This is the branch-1 patch, btw; the same change applies, the file just lives in a different directory. Add Java version to startup message Key: HADOOP-8931 URL: https://issues.apache.org/jira/browse/HADOOP-8931 Project: Hadoop Common Issue Type: Improvement Affects Versions: 1.0.0, 2.0.0-alpha Reporter: Eli Collins Assignee: Eli Collins Priority: Trivial Fix For: 1.2.0, 2.0.3-alpha Attachments: hadoop-8931-b1.txt, hadoop-8931.txt I often look at logs and have to track down the java version they were run with, it would be useful if we logged this as part of the startup message. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8931) Add Java version to startup message
[ https://issues.apache.org/jira/browse/HADOOP-8931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eli Collins updated HADOOP-8931: Resolution: Fixed Fix Version/s: 2.0.3-alpha 1.2.0 Target Version/s: (was: 1.1.1, 2.0.3-alpha) Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) I've committed this and merged to branch-1 and branch-2. Add Java version to startup message Key: HADOOP-8931 URL: https://issues.apache.org/jira/browse/HADOOP-8931 Project: Hadoop Common Issue Type: Improvement Affects Versions: 1.0.0, 2.0.0-alpha Reporter: Eli Collins Assignee: Eli Collins Priority: Trivial Fix For: 1.2.0, 2.0.3-alpha Attachments: hadoop-8931-b1.txt, hadoop-8931.txt I often look at logs and have to track down the java version they were run with, it would be useful if we logged this as part of the startup message. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
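The change itself is tiny; a sketch of the idea (the real patch edits the startup banner built in StringUtils, and the exact label text here is an assumption):

```java
public class StartupMessage {
    // Include the JVM version alongside the build info already logged
    // in the daemon's STARTUP_MSG banner.
    public static String javaLine() {
        return "STARTUP_MSG:   java = " + System.getProperty("java.version");
    }

    public static void main(String[] args) {
        System.out.println(javaLine());
    }
}
```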
[jira] [Commented] (HADOOP-8930) Cumulative code coverage calculation
[ https://issues.apache.org/jira/browse/HADOOP-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477390#comment-13477390 ] Hadoop QA commented on HADOOP-8930: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12549388/HADOOP-8930.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in . {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1637//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1637//console This message is automatically generated. Cumulative code coverage calculation Key: HADOOP-8930 URL: https://issues.apache.org/jira/browse/HADOOP-8930 Project: Hadoop Common Issue Type: Improvement Components: test Affects Versions: 0.23.3, 2.0.2-alpha Reporter: Andrey Klochkov Assignee: Andrey Klochkov Attachments: HADOOP-8930.patch When analyzing code coverage in Hadoop Core, we noticed that some coverage gaps are caused by the way the coverage calculation is done currently. 
More specifically, right now coverage can not be calculated for the whole Core at once, but can only be calculated separately for top level modules like common-project, hadoop-hdfs-project etc. At the same time, some code in particular modules is tested by tests in other modules of Core. For example, org.apache.hadoop.fs from hadoop-common-project/hadoop-common is not covered there but it's covered by tests under hadoop-hdfs-project. To enable calculation of cumulative code coverage it's needed to move Clover profile definition up one level, from hadoop-project/pom.xml to the top level pom.xml (hadoop-main). Patch both for 0.23 and 2.x will be attached shortly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8926) hadoop.util.PureJavaCrc32 cache hit-ratio is low for static data
[ https://issues.apache.org/jira/browse/HADOOP-8926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477393#comment-13477393 ] Hudson commented on HADOOP-8926: Integrated in Hadoop-trunk-Commit #2875 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/2875/]) HADOOP-8926. hadoop.util.PureJavaCrc32 cache hit-ratio is low for static data (Gopal V via bobby) (Revision 1399005) Result = SUCCESS bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1399005
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/PureJavaCrc32.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/PureJavaCrc32C.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestPureJavaCrc32.java
hadoop.util.PureJavaCrc32 cache hit-ratio is low for static data Key: HADOOP-8926 URL: https://issues.apache.org/jira/browse/HADOOP-8926 Project: Hadoop Common Issue Type: Improvement Components: util Affects Versions: 2.0.3-alpha Environment: Ubuntu 10.10 i386 Reporter: Gopal V Assignee: Gopal V Priority: Trivial Labels: optimization Fix For: 3.0.0, 2.0.3-alpha Attachments: crc32-faster+readable.patch, crc32-faster+test.patch, pure-crc32-cache-hit.patch While running microbenchmarks for the HDFS write codepath, a significant part of the CPU fraction was consumed by DataChecksum.update(). The attached patch converts the static arrays in CRC32 into a single linear array for a performance boost in the inner loop. Milliseconds for 1 GiB (16400 loops over a 64 KB chunk):
|| platform || original || cache-aware || improvement (%) ||
| x86 | 3894 | 2304 | 40.83 |
| x86_64 | 2131 | 1826 | 14 |
The performance improvement on x86 is rather larger than in the 64-bit case, due to the extra register/stack pressure caused by the static arrays. 
A closer analysis of the PureJavaCrc32 JIT code shows the following assembly fragment
{code}
0x40f1e345: mov $0x184,%ecx
0x40f1e34a: mov 0x4415b560(%ecx),%ecx  ;*getstatic T8_5
                                       ; - PureJavaCrc32::update@95 (line 61)
                                       ; {oop('PureJavaCrc32')}
0x40f1e350: mov %ecx,0x2c(%esp)
{code}
Basically, the static variables T8_0 through to T8_7 are being spilled to the stack because of register pressure. The x86_64 case has a lower likelihood of such pessimistic JIT code due to the increased number of registers. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
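The "single linear array" idea can be sketched in plain Java. The reconstruction below is illustrative only, not the PureJavaCrc32 patch (which uses eight T8_* tables and a different loop): four 256-entry CRC sub-tables are laid out contiguously in one int[], so the JIT tracks a single array base instead of spilling several static references to the stack.

```java
// Illustrative slicing-by-4 CRC-32 with all sub-tables merged into one
// flat array. Not the actual PureJavaCrc32 code; it demonstrates the
// cache/register idea from the patch in miniature.
public class FlatTableCrc32 {
    private static final int POLY = 0xEDB88320; // reflected CRC-32 polynomial
    // T[0..255] = T0, T[256..511] = T1, T[512..767] = T2, T[768..1023] = T3
    private static final int[] T = new int[4 * 256];

    static {
        // T0: classic byte-at-a-time table.
        for (int i = 0; i < 256; i++) {
            int c = i;
            for (int k = 0; k < 8; k++) {
                c = (c >>> 1) ^ ((c & 1) != 0 ? POLY : 0);
            }
            T[i] = c;
        }
        // Tt[i] = CRC of byte i followed by t zero bytes.
        for (int i = 0; i < 256; i++) {
            int c = T[i];
            for (int t = 1; t < 4; t++) {
                c = (c >>> 8) ^ T[c & 0xff];
                T[t * 256 + i] = c;
            }
        }
    }

    public static int crc32(byte[] b) {
        int crc = 0xffffffff;
        int i = 0;
        for (; i + 4 <= b.length; i += 4) {
            // Fold 4 input bytes in at once, then one lookup per byte,
            // all hitting the same flat table.
            crc ^= (b[i] & 0xff) | (b[i + 1] & 0xff) << 8
                 | (b[i + 2] & 0xff) << 16 | (b[i + 3] & 0xff) << 24;
            crc = T[768 + (crc & 0xff)] ^ T[512 + ((crc >>> 8) & 0xff)]
                ^ T[256 + ((crc >>> 16) & 0xff)] ^ T[(crc >>> 24) & 0xff];
        }
        for (; i < b.length; i++) { // leftover tail bytes
            crc = (crc >>> 8) ^ T[(crc ^ b[i]) & 0xff];
        }
        return ~crc;
    }
}
```

The flat layout is the point: one base pointer plus a computed offset replaces four (or, in the patch, eight) independent getstatic loads in the hot loop.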
[jira] [Created] (HADOOP-8934) Shell command ls should include sort options
Jonathan Allen created HADOOP-8934: -- Summary: Shell command ls should include sort options Key: HADOOP-8934 URL: https://issues.apache.org/jira/browse/HADOOP-8934 Project: Hadoop Common Issue Type: Improvement Components: fs Reporter: Jonathan Allen Assignee: Jonathan Allen Priority: Minor The shell command ls should include options to sort the output similar to the unix ls command. The following options seem appropriate:
-t : sort by modification time
-S : sort by file size
-r : reverse the sort order
-u : use access time rather than modification time for sort and display
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
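To make the proposal concrete, the four flags compose naturally as comparators. This is a hypothetical sketch, not code from any attached patch: Entry is a stand-in for Hadoop's FileStatus, and all names here are invented for illustration.

```java
import java.util.Comparator;

// Hypothetical sketch of composing ls sort flags as comparators.
public class LsSortSketch {
    public static final class Entry {
        final String path; final long mtime; final long atime; final long size;
        public Entry(String path, long mtime, long atime, long size) {
            this.path = path; this.mtime = mtime; this.atime = atime; this.size = size;
        }
        public String path() { return path; }
    }

    // -t: newest first; -S: largest first; -u: use access time with -t;
    // -r: reverse whichever order was selected (mirroring unix ls semantics).
    public static Comparator<Entry> order(boolean byTime, boolean bySize,
                                          boolean reverse, boolean useAtime) {
        Comparator<Entry> c;
        if (bySize) {
            c = Comparator.comparingLong((Entry e) -> e.size).reversed();
        } else if (byTime) {
            c = Comparator.comparingLong((Entry e) -> useAtime ? e.atime : e.mtime)
                          .reversed();
        } else {
            c = Comparator.comparing((Entry e) -> e.path); // default: lexicographic
        }
        return reverse ? c.reversed() : c;
    }
}
```

Treating -r as a final `reversed()` over the selected key keeps flag combinations like -tr and -Sr free, rather than special-cased.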
[jira] [Updated] (HADOOP-8930) Cumulative code coverage calculation
[ https://issues.apache.org/jira/browse/HADOOP-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrey Klochkov updated HADOOP-8930: Attachment: HADOOP-8930-branch-0.23.patch Attaching a patch for 0.23 Cumulative code coverage calculation Key: HADOOP-8930 URL: https://issues.apache.org/jira/browse/HADOOP-8930 Project: Hadoop Common Issue Type: Improvement Components: test Affects Versions: 0.23.3, 2.0.2-alpha Reporter: Andrey Klochkov Assignee: Andrey Klochkov Attachments: HADOOP-8930-branch-0.23.patch, HADOOP-8930.patch When analyzing code coverage in Hadoop Core, we noticed that some coverage gaps are caused by the way the coverage calculation is done currently. More specifically, right now coverage can not be calculated for the whole Core at once, but can only be calculated separately for top level modules like common-project, hadoop-hdfs-project etc. At the same time, some code in particular modules is tested by tests in other modules of Core. For example, org.apache.hadoop.fs from hadoop-common-project/hadoop-common is not covered there but it's covered by tests under hadoop-hdfs-project. To enable calculation of cumulative code coverage it's needed to move Clover profile definition up one level, from hadoop-project/pom.xml to the top level pom.xml (hadoop-main). Patch both for 0.23 and 2.x will be attached shortly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8930) Cumulative code coverage calculation
[ https://issues.apache.org/jira/browse/HADOOP-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477408#comment-13477408 ] Hadoop QA commented on HADOOP-8930: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12549394/HADOOP-8930-branch-0.23.patch against trunk revision . {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1638//console This message is automatically generated. Cumulative code coverage calculation Key: HADOOP-8930 URL: https://issues.apache.org/jira/browse/HADOOP-8930 Project: Hadoop Common Issue Type: Improvement Components: test Affects Versions: 0.23.3, 2.0.2-alpha Reporter: Andrey Klochkov Assignee: Andrey Klochkov Attachments: HADOOP-8930-branch-0.23.patch, HADOOP-8930.patch When analyzing code coverage in Hadoop Core, we noticed that some coverage gaps are caused by the way the coverage calculation is done currently. More specifically, right now coverage can not be calculated for the whole Core at once, but can only be calculated separately for top level modules like common-project, hadoop-hdfs-project etc. At the same time, some code in particular modules is tested by tests in other modules of Core. For example, org.apache.hadoop.fs from hadoop-common-project/hadoop-common is not covered there but it's covered by tests under hadoop-hdfs-project. To enable calculation of cumulative code coverage it's needed to move Clover profile definition up one level, from hadoop-project/pom.xml to the top level pom.xml (hadoop-main). Patch both for 0.23 and 2.x will be attached shortly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8930) Cumulative code coverage calculation
[ https://issues.apache.org/jira/browse/HADOOP-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477412#comment-13477412 ] Andrey Klochkov commented on HADOOP-8930: - No new tests are needed, as the patch updates the build configuration only. Also, it seems the robot doesn't apply non-trunk patches to the proper branch. Cumulative code coverage calculation Key: HADOOP-8930 URL: https://issues.apache.org/jira/browse/HADOOP-8930 Project: Hadoop Common Issue Type: Improvement Components: test Affects Versions: 0.23.3, 2.0.2-alpha Reporter: Andrey Klochkov Assignee: Andrey Klochkov Attachments: HADOOP-8930-branch-0.23.patch, HADOOP-8930.patch When analyzing code coverage in Hadoop Core, we noticed that some coverage gaps are caused by the way the coverage calculation is done currently. More specifically, right now coverage can not be calculated for the whole Core at once, but can only be calculated separately for top level modules like common-project, hadoop-hdfs-project etc. At the same time, some code in particular modules is tested by tests in other modules of Core. For example, org.apache.hadoop.fs from hadoop-common-project/hadoop-common is not covered there but it's covered by tests under hadoop-hdfs-project. To enable calculation of cumulative code coverage it's needed to move Clover profile definition up one level, from hadoop-project/pom.xml to the top level pom.xml (hadoop-main). Patch both for 0.23 and 2.x will be attached shortly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8930) Cumulative code coverage calculation
[ https://issues.apache.org/jira/browse/HADOOP-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrey Klochkov updated HADOOP-8930: Target Version/s: 2.0.3-alpha, 0.23.5 (was: 0.23.2, 2.0.3-alpha) Cumulative code coverage calculation Key: HADOOP-8930 URL: https://issues.apache.org/jira/browse/HADOOP-8930 Project: Hadoop Common Issue Type: Improvement Components: test Affects Versions: 0.23.3, 2.0.2-alpha Reporter: Andrey Klochkov Assignee: Andrey Klochkov Attachments: HADOOP-8930-branch-0.23.patch, HADOOP-8930.patch When analyzing code coverage in Hadoop Core, we noticed that some coverage gaps are caused by the way the coverage calculation is done currently. More specifically, right now coverage can not be calculated for the whole Core at once, but can only be calculated separately for top level modules like common-project, hadoop-hdfs-project etc. At the same time, some code in particular modules is tested by tests in other modules of Core. For example, org.apache.hadoop.fs from hadoop-common-project/hadoop-common is not covered there but it's covered by tests under hadoop-hdfs-project. To enable calculation of cumulative code coverage it's needed to move Clover profile definition up one level, from hadoop-project/pom.xml to the top level pom.xml (hadoop-main). Patch both for 0.23 and 2.x will be attached shortly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-6311) Add support for unix domain sockets to JNI libs
[ https://issues.apache.org/jira/browse/HADOOP-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477419#comment-13477419 ] Colin Patrick McCabe commented on HADOOP-6311: -- Thanks for the comments.
With respect to security: there is always a possibility for a client to open a socket with the same name as the server would have used. This is similar to the problem with TCP/IP sockets of a malicious program grabbing the port before the DataNode could get it (or after the DataNode has died). I guess this is a problem that actually is worse with the abstract socket namespace. With path-based sockets, you can set up the path so that the permissions of the path itself prevent this attack. However, with the abstract socket namespace, there's no way to prevent another process from grabbing the port first.
I agree that there are downsides to the short-circuit approach. I was very careful to maintain the ability for the server to decline to offer short-circuit local reads in my patch set. This is obviously important for our future flexibility. It might be advisable to allow this on a file-by-file basis as well.
I don't think that on-disk format changes are that big of a deal for the short-circuit pathway. We tell old clients they can't use short-circuit reads on those files, and fix new clients to understand the new format.
We should definitely have a way for short-circuit clients to report statistics, disk errors, etc. to the DataNode. However, let's not gate this change on features like that. They can easily be added as features later and aren't really related to the core issue of fixing local reads + security. I think I'll open a separate JIRA for that.
TCP optimizations are pretty cool, but not when you run on RHEL6, as many folks do :) Maybe we should open a separate JIRA to investigate how things like TCP fast open, changing TCP kernel options, etc. might be used with Hadoop in the future. 
There are also certain performance improvements we could do in the read and write paths on the DataNode, but again, that's out of scope for this JIRA, I think. Add support for unix domain sockets to JNI libs --- Key: HADOOP-6311 URL: https://issues.apache.org/jira/browse/HADOOP-6311 Project: Hadoop Common Issue Type: New Feature Components: native Affects Versions: 0.20.0 Reporter: Todd Lipcon Assignee: Colin Patrick McCabe Attachments: 6311-trunk-inprogress.txt, design.txt, HADOOP-6311.014.patch, HADOOP-6311.016.patch, HADOOP-6311.018.patch, HADOOP-6311.020b.patch, HADOOP-6311.020.patch, HADOOP-6311.021.patch, HADOOP-6311.022.patch, HADOOP-6311-0.patch, HADOOP-6311-1.patch, hadoop-6311.txt For HDFS-347 we need to use unix domain sockets. This JIRA is to include a library in common which adds a o.a.h.net.unix package based on the code from Android (apache 2 license) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
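As an aside for readers revisiting this thread on modern JDKs: path-based UNIX domain sockets are now available in Java itself (JDK 16+), with no JNI required, which is enough to experiment with the path-vs-abstract-namespace points above. A minimal echo sketch, unrelated to the o.a.h.net.unix library this JIRA proposes:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.StandardProtocolFamily;
import java.net.UnixDomainSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Requires JDK 16+. Binds a path-based UNIX domain socket in a fresh
// temp directory, sends one message through it, and echoes it back.
public class UdsDemo {
    public static String echoOnce(String msg) {
        try {
            Path sock = Files.createTempDirectory("uds").resolve("dn.sock");
            try (ServerSocketChannel server =
                     ServerSocketChannel.open(StandardProtocolFamily.UNIX)) {
                server.bind(UnixDomainSocketAddress.of(sock));
                try (SocketChannel client =
                         SocketChannel.open(UnixDomainSocketAddress.of(sock));
                     SocketChannel peer = server.accept()) {
                    client.write(ByteBuffer.wrap(msg.getBytes(StandardCharsets.UTF_8)));
                    ByteBuffer buf = ByteBuffer.allocate(256);
                    peer.read(buf); // one read suffices for a short local message
                    buf.flip();
                    return StandardCharsets.UTF_8.decode(buf).toString();
                }
            } finally {
                Files.deleteIfExists(sock);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Note that the socket lives at a filesystem path, so it is exactly the case Colin describes where directory permissions, not the socket API, provide the access control.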
[jira] [Updated] (HADOOP-8567) Backport conf servlet with dump running configuration to branch 1.x
[ https://issues.apache.org/jira/browse/HADOOP-8567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jing Zhao updated HADOOP-8567: -- Attachment: Hadoop.8567.branch-1.002.patch Backport conf servlet with dump running configuration to branch 1.x --- Key: HADOOP-8567 URL: https://issues.apache.org/jira/browse/HADOOP-8567 Project: Hadoop Common Issue Type: New Feature Components: conf Affects Versions: 1.0.0 Reporter: Junping Du Assignee: Jing Zhao Attachments: Hadoop.8567.branch-1.001.patch, Hadoop.8567.branch-1.002.patch HADOOP-6408 provided a conf servlet that can dump the running configuration, which greatly helps admins troubleshoot configuration issues. However, that patch works only on branches after 0.21 and should be backported to branch 1.x. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8888) add the ability to suppress the deprecated warnings when using hadoop cli
[ https://issues.apache.org/jira/browse/HADOOP-8888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477457#comment-13477457 ] Arpit Gupta commented on HADOOP-8888: - I am going to close this as won't fix. We seem to be throwing deprecated warnings in our shell scripts as well as Java code (Commands.java). To do this for both, I would have to add another config in the common configuration to suppress these warnings. I don't feel that is something we should do for this. If people feel they would like this, I can certainly generate the patch. add the ability to suppress the deprecated warnings when using hadoop cli - Key: HADOOP-8888 URL: https://issues.apache.org/jira/browse/HADOOP-8888 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0 Reporter: Arpit Gupta Assignee: Arpit Gupta Something similar to what HADOOP_HOME_WARN_SUPPRESS is used for in branch-1. Maybe we can introduce HADOOP_DEPRECATED_WARN_SUPPRESS which, if set to yes, will suppress the various warnings that are thrown. For example, commands like
{code}
hadoop dfs
hadoop jar
{code}
etc. will print out deprecated warnings. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HADOOP-8888) add the ability to suppress the deprecated warnings when using hadoop cli
[ https://issues.apache.org/jira/browse/HADOOP-8888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Gupta resolved HADOOP-8888. - Resolution: Won't Fix add the ability to suppress the deprecated warnings when using hadoop cli - Key: HADOOP-8888 URL: https://issues.apache.org/jira/browse/HADOOP-8888 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0 Reporter: Arpit Gupta Assignee: Arpit Gupta Something similar to what HADOOP_HOME_WARN_SUPPRESS is used for in branch-1. Maybe we can introduce HADOOP_DEPRECATED_WARN_SUPPRESS which, if set to yes, will suppress the various warnings that are thrown. For example, commands like
{code}
hadoop dfs
hadoop jar
{code}
etc. will print out deprecated warnings. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-6311) Add support for unix domain sockets to JNI libs
[ https://issues.apache.org/jira/browse/HADOOP-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477467#comment-13477467 ] Todd Lipcon commented on HADOOP-6311: - bq. With respect to security: there is always a possibility for a client to open a socket with the same name as the server would have used. This is similar to the problem with TCP/IP sockets of a malicious program grabbing the port before the DataNode could get it (or after the DataNode has died.) That's why secure clusters use low (privileged) ports for the data transfer protocol. bq. I don't think that on-disk format changes are that big of a deal for the short-circuit pathway. We tell old clients they can't use short-circuit reads on those files, and fix new clients to understand the new format. Agreed, just need to make sure the deny pathway works and ideally some kind of version number exposed. bq. TCP optimizations are pretty cool, but not when you run on RHEL6, as many folks do Maybe we should open a separate JIRA to investigate things like TCP fast open, changing TCP kernel options, etc. might be used with Hadoop in the future. There are also certain performance improvements we could do in the read and write paths on the DataNode, but again, that's out of scope for this JIRA, I think. Agreed, but my question is more this: let's assume that unix sockets for the data path are 3x as fast as local TCP sockets. If that's the case, then do we still get a big benefit from short-circuit? I think the answer is probably yes for random read, but no for sequential. The point about trying the tcp friends in future versions is just one potential way of evaluating this without having to write all the code for a unix socket data path. If tcp friends is comparable to short circuit, then unix sockets would probably also be comparable. 
Add support for unix domain sockets to JNI libs --- Key: HADOOP-6311 URL: https://issues.apache.org/jira/browse/HADOOP-6311 Project: Hadoop Common Issue Type: New Feature Components: native Affects Versions: 0.20.0 Reporter: Todd Lipcon Assignee: Colin Patrick McCabe Attachments: 6311-trunk-inprogress.txt, design.txt, HADOOP-6311.014.patch, HADOOP-6311.016.patch, HADOOP-6311.018.patch, HADOOP-6311.020b.patch, HADOOP-6311.020.patch, HADOOP-6311.021.patch, HADOOP-6311.022.patch, HADOOP-6311-0.patch, HADOOP-6311-1.patch, hadoop-6311.txt For HDFS-347 we need to use unix domain sockets. This JIRA is to include a library in common which adds a o.a.h.net.unix package based on the code from Android (apache 2 license) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-6311) Add support for unix domain sockets to JNI libs
[ https://issues.apache.org/jira/browse/HADOOP-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477471#comment-13477471 ] Colin Patrick McCabe commented on HADOOP-6311: -- With regard to security: after further reflection, I think we will need to ask system administrators to establish secure directories to hold the UNIX domain sockets. In practice, this means a directory owned by hdfs, where neither it nor any of its parent directories are vulnerable to attack. bq. ... my question is more this: let's assume that unix sockets for the data path are 3x as fast as local TCP sockets. If that's the case, then do we still get a big benefit from short-circuit? Oh, I misinterpreted. You were talking about using UNIX domain instead of TCP for data traffic. Yeah, it could be interesting. It's a time-honored way to get better performance on UNIXes. I'll do some tests if I can get the UNIX domain sockets to implement the standard interface (and if the resulting combination actually works.) I think there is a good chance that it will... Add support for unix domain sockets to JNI libs --- Key: HADOOP-6311 URL: https://issues.apache.org/jira/browse/HADOOP-6311 Project: Hadoop Common Issue Type: New Feature Components: native Affects Versions: 0.20.0 Reporter: Todd Lipcon Assignee: Colin Patrick McCabe Attachments: 6311-trunk-inprogress.txt, design.txt, HADOOP-6311.014.patch, HADOOP-6311.016.patch, HADOOP-6311.018.patch, HADOOP-6311.020b.patch, HADOOP-6311.020.patch, HADOOP-6311.021.patch, HADOOP-6311.022.patch, HADOOP-6311-0.patch, HADOOP-6311-1.patch, hadoop-6311.txt For HDFS-347 we need to use unix domain sockets. This JIRA is to include a library in common which adds a o.a.h.net.unix package based on the code from Android (apache 2 license) -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
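The "secure directories" requirement above can be illustrated with a simplified check. This is an assumption-laden sketch, not the eventual Hadoop implementation: a real validator would also inspect each component's owner and handle sticky, world-writable directories like /tmp; this version simply refuses any socket directory whose path contains a group- or other-writable component.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Simplified sketch: a path-based socket is only as safe as its directory
// chain, since any writable ancestor lets another local user rename the
// path out from under the daemon.
public class SocketDirCheck {
    public static boolean isSecure(Path dir) throws IOException {
        for (Path p = dir.toAbsolutePath().normalize(); p != null; p = p.getParent()) {
            Set<PosixFilePermission> perms = Files.getPosixFilePermissions(p);
            if (perms.contains(PosixFilePermission.GROUP_WRITE)
                    || perms.contains(PosixFilePermission.OTHERS_WRITE)) {
                return false; // some component is writable by others
            }
        }
        return true;
    }

    // Demo: a freshly created world-writable directory can never be secure.
    public static boolean demoInsecure() {
        try {
            Path d = Files.createTempDirectory("sock");
            Files.setPosixFilePermissions(d, PosixFilePermissions.fromString("rwxrwxrwx"));
            return isSecure(d);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Walking all the way to the filesystem root is the essential part: a 700 directory under a world-writable parent is still attackable, which is exactly why administrators must establish the whole chain.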
[jira] [Commented] (HADOOP-8779) Use tokens regardless of authentication type
[ https://issues.apache.org/jira/browse/HADOOP-8779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477497#comment-13477497 ] Kan Zhang commented on HADOOP-8779: --- I agree the use of tokens for subsequent authentication (referred to as internal auth in previous discussions, but maybe subsequent auth is a better name?) shouldn't be limited to Kerberos authenticated initial connections (referred to as external auth in previous discussions, but maybe initial auth is better name?). However, IMHO, we should give users the option not to use tokens for subsequent authentication, as is the case when security is turned off today. See HDFS-4056 for more discussion. https://issues.apache.org/jira/browse/HDFS-4056?focusedCommentId=13477142page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13477142 Use tokens regardless of authentication type Key: HADOOP-8779 URL: https://issues.apache.org/jira/browse/HADOOP-8779 Project: Hadoop Common Issue Type: New Feature Components: fs, security Affects Versions: 3.0.0, 2.0.2-alpha Reporter: Daryn Sharp Assignee: Daryn Sharp Security is a combination of authentication and authorization (tokens). Authorization may be granted independently of the authentication model. Tokens should be used regardless of simple or kerberos authentication. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8935) Make 'winutils ls' show the SID if the owner does not exist on the system
Chuan Liu created HADOOP-8935: - Summary: Make 'winutils ls' show the SID if the owner does not exist on the system Key: HADOOP-8935 URL: https://issues.apache.org/jira/browse/HADOOP-8935 Project: Hadoop Common Issue Type: Bug Reporter: Chuan Liu Priority: Minor Right now, 'winutils ls' will fail if the file belongs to a user SID that does not exist on the system, e.g. because the user was deleted. Previously, this was only a hypothetical scenario. However, we have seen some failures in the Azure deployment where the OS is re-imaged, rendering the old SIDs invalid. [~jgordon] proposed to display the SID itself in the invalid-SID case, similar to the behavior on Linux. This JIRA is created to track this proposal. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Assigned] (HADOOP-8935) Make 'winutils ls' show the SID if the owner does not exist on the system
[ https://issues.apache.org/jira/browse/HADOOP-8935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chuan Liu reassigned HADOOP-8935: - Assignee: Chuan Liu Make 'winutils ls' show the SID if the owner does not exist on the system - Key: HADOOP-8935 URL: https://issues.apache.org/jira/browse/HADOOP-8935 Project: Hadoop Common Issue Type: Bug Reporter: Chuan Liu Assignee: Chuan Liu Priority: Minor Right now, 'winutils ls' will fail if the file belongs to a user SID that does not exist on the system. E.g. the user is deleted. Previously, this is only a hypothesis scenario. However, we have seen some failures in the Azure deployment where the OS is re-imaged, and renders the old SID invalid. [~jgordon] proposed to display the SID itself in the invalid SID case similar to the situation on Linux. This JIRA is created to track this proposal. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8935) Make 'winutils ls' show the SID if the owner does not exist on the system
[ https://issues.apache.org/jira/browse/HADOOP-8935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chuan Liu updated HADOOP-8935: -- Attachment: HADOOP-8935-branch-1-win.patch Make 'winutils ls' show the SID if the owner does not exist on the system - Key: HADOOP-8935 URL: https://issues.apache.org/jira/browse/HADOOP-8935 Project: Hadoop Common Issue Type: Bug Reporter: Chuan Liu Assignee: Chuan Liu Priority: Minor Attachments: HADOOP-8935-branch-1-win.patch Right now, 'winutils ls' will fail if the file belongs to a user SID that does not exist on the system. E.g. the user is deleted. Previously, this is only a hypothesis scenario. However, we have seen some failures in the Azure deployment where the OS is re-imaged, and renders the old SID invalid. [~jgordon] proposed to display the SID itself in the invalid SID case similar to the situation on Linux. This JIRA is created to track this proposal. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8935) Make 'winutils ls' show the SID if the owner does not exist on the system
[ https://issues.apache.org/jira/browse/HADOOP-8935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chuan Liu updated HADOOP-8935: -- Affects Version/s: 1-win Make 'winutils ls' show the SID if the owner does not exist on the system - Key: HADOOP-8935 URL: https://issues.apache.org/jira/browse/HADOOP-8935 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Chuan Liu Assignee: Chuan Liu Priority: Minor Fix For: 1-win Attachments: HADOOP-8935-branch-1-win.patch Right now, 'winutils ls' will fail if the file belongs to a user SID that does not exist on the system. E.g. the user is deleted. Previously, this is only a hypothesis scenario. However, we have seen some failures in the Azure deployment where the OS is re-imaged, and renders the old SID invalid. [~jgordon] proposed to display the SID itself in the invalid SID case similar to the situation on Linux. This JIRA is created to track this proposal. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8935) Make 'winutils ls' show the SID if the owner does not exist on the system
[ https://issues.apache.org/jira/browse/HADOOP-8935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chuan Liu updated HADOOP-8935: -- Fix Version/s: 1-win Make 'winutils ls' show the SID if the owner does not exist on the system - Key: HADOOP-8935 URL: https://issues.apache.org/jira/browse/HADOOP-8935 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Chuan Liu Assignee: Chuan Liu Priority: Minor Fix For: 1-win Attachments: HADOOP-8935-branch-1-win.patch Right now, 'winutils ls' will fail if the file belongs to a user SID that does not exist on the system. E.g. the user is deleted. Previously, this is only a hypothesis scenario. However, we have seen some failures in the Azure deployment where the OS is re-imaged, and renders the old SID invalid. [~jgordon] proposed to display the SID itself in the invalid SID case similar to the situation on Linux. This JIRA is created to track this proposal. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8936) Local users should be able to query a domain user's groups on Windows
Chuan Liu created HADOOP-8936: - Summary: Local users should be able to query a domain user's groups on Windows Key: HADOOP-8936 URL: https://issues.apache.org/jira/browse/HADOOP-8936 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Chuan Liu Assignee: Chuan Liu Priority: Minor Fix For: 1-win When Hadoop is run by a local user and a domain user submits a job, Hadoop will need to get the local groups for the domain user. This currently fails in 'winutils' because we try to query the domain controller for domain users, and local users do not have permission to do so. We should fix the problem so that local users are able to query a domain user's local groups. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8936) Local users should be able to query a domain user's groups on Windows
[ https://issues.apache.org/jira/browse/HADOOP-8936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chuan Liu updated HADOOP-8936: -- Attachment: HADOOP-8936-branch-1-win.patch Local users should be able to query a domain user's groups on Windows - Key: HADOOP-8936 URL: https://issues.apache.org/jira/browse/HADOOP-8936 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Chuan Liu Assignee: Chuan Liu Priority: Minor Fix For: 1-win Attachments: HADOOP-8936-branch-1-win.patch When Hadoop is run by a local user and a domain user submits a job, Hadoop will need to get the local groups for the domain user. This currently fails in 'winutils' because we try to query the domain controller for domain users, and local users do not have permission to do so. We should fix the problem so that local users are able to query a domain user's local groups. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh
[ https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477519#comment-13477519 ] Matt Foley commented on HADOOP-8924: Hi Aaron, I advised Chris to use Python. As a general matter I think we should stay cross-platform wherever it makes sense to do so. In other words, if something is obviously totally platform-dependent, then go ahead and do two versions, one in shell for Linux, and one in PowerShell or cmd for Windows. However, where it can be platform-independent, I think we should use a platform-independent scripting language to write a single script (which may include some conditional code for platform-dependent cases if necessary). Obviously it works fine in this case. Python is only one possibility, of course. However it seemed a reasonable choice. It's object-oriented, with reasonable IDE support. It's free (like beer and speech), available on essentially all platforms, and in good odour with the OSS community. The only other evident candidate would be Ruby, but I think more people know Python than Ruby, although I can't substantiate that. (Though there is the [Tiobe Index|http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html], which I don't claim is authoritative.) This is worth discussing, because as we integrate the Windows port, we'll have lots of opportunity to follow whatever model we agree on. Shall we continue here, or pop it to a common-dev thread? And I agree with you that whatever answer we agree on should be documented as a build dependency. 
Hadoop Common creating package-info.java must not depend on sh -- Key: HADOOP-8924 URL: https://issues.apache.org/jira/browse/HADOOP-8924 Project: Hadoop Common Issue Type: Improvement Components: build Affects Versions: trunk-win Reporter: Chris Nauroth Assignee: Chris Nauroth Fix For: trunk-win Attachments: HADOOP-8924-branch-trunk-win.patch Currently, the build process relies on saveVersion.sh to generate package-info.java with a version annotation. The sh binary may not be available on all developers' machines (e.g. Windows without Cygwin). This issue tracks removal of that dependency in Hadoop Common.
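For context on what a platform-independent replacement for saveVersion.sh would need to do, here is a minimal Python sketch that emits a package-info.java with a version annotation. This is an illustration only, not the attached patch: the annotation name, fields, and default values below are illustrative placeholders, and the git fallback is one possible way to handle builds from a release tarball where no VCS metadata is available.

```python
# Hypothetical sketch of a cross-platform saveVersion.sh replacement.
# All field names/values are illustrative, not taken from the real patch.
import os
import subprocess
import time

TEMPLATE = '''/*
 * Generated by the build. Do not edit.
 */
@HadoopVersionAnnotation(version="{version}", revision="{revision}",
                         user="{user}", date="{date}")
package org.apache.hadoop;
'''

def git_revision(default="unknown"):
    """Best-effort lookup of the current source revision; falls back to a
    default when no VCS is available (e.g. building from a release tarball)."""
    try:
        return subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip()
    except (OSError, subprocess.CalledProcessError):
        return default

def make_package_info(version, user=None, date=None, revision=None):
    """Render the package-info.java source as a string."""
    return TEMPLATE.format(
        version=version,
        revision=revision or git_revision(),
        user=user or os.environ.get("USER", "unknown"),
        date=date or time.strftime("%a %b %d %H:%M:%S %Y"),
    )

if __name__ == "__main__":
    print(make_package_info("3.0.0-SNAPSHOT", user="builder",
                            date="(build time)", revision="(vcs revision)"))
```

Because everything here is stdlib, the same script runs unchanged on Linux and on Windows without Cygwin, which is the point Matt's comment is making about a single platform-independent script.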
[jira] [Commented] (HADOOP-8936) Local users should be able to query a domain user's groups on Windows
[ https://issues.apache.org/jira/browse/HADOOP-8936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477549#comment-13477549 ] Ivan Mitic commented on HADOOP-8936: Simple fix, looks good to me, +1
[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh
[ https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477557#comment-13477557 ] Aaron T. Myers commented on HADOOP-8924: Hi Matt, a discussion on common-dev@ makes sense to me. I don't feel strongly about what the right answer is, it just wasn't obvious to me that it's Python. If it's determined to be Python, then that's fine by me. And yes, whenever this is resolved, we should definitely file a JIRA to update BUILDING.txt accordingly.
[jira] [Updated] (HADOOP-8925) Remove packaging
[ https://issues.apache.org/jira/browse/HADOOP-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eli Collins updated HADOOP-8925: Attachment: hadoop-8925.txt All +1s on the dev list. Patch attached, generated via 'svn rm' of the three package directories. Remove packaging Key: HADOOP-8925 URL: https://issues.apache.org/jira/browse/HADOOP-8925 Project: Hadoop Common Issue Type: Improvement Components: build Affects Versions: 2.0.0-alpha Reporter: Eli Collins Assignee: Eli Collins Attachments: hadoop-8925.txt Per discussion on HADOOP-8809, now that Bigtop is a TLP and supports Hadoop v2, let's remove the Hadoop packaging from trunk and branch-2. We should remove it anyway since it is no longer part of the build post-mavenization, was not updated post-MR1 (there's no MR2/YARN packaging), and is not maintained.
[jira] [Updated] (HADOOP-8925) Remove packaging
[ https://issues.apache.org/jira/browse/HADOOP-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eli Collins updated HADOOP-8925: Status: Patch Available (was: Open)