[jira] [Resolved] (HADOOP-8468) Umbrella of enhancements to support different failure and locality topologies
[ https://issues.apache.org/jira/browse/HADOOP-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du resolved HADOOP-8468.
--------------------------------
    Resolution: Fixed

> Umbrella of enhancements to support different failure and locality topologies
> -----------------------------------------------------------------------------
>
>                 Key: HADOOP-8468
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8468
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: ha, io
>    Affects Versions: 1.0.0, 2.0.0-alpha
>            Reporter: Junping Du
>            Assignee: Junping Du
>            Priority: Major
>         Attachments: HADOOP-8468-total-v3.patch, HADOOP-8468-total.patch, HVE User Guide on branch-1(draft ).pdf, HVE_Hadoop World Meetup 2012.pptx, Proposal for enchanced failure and locality topologies (revised-1.0).pdf, Proposal for enchanced failure and locality topologies.pdf
>
> The current Hadoop network topology (described in previous issues such as HADOOP-692) worked well for the classic three-tier network it was designed around. However, it does not take into account other failure models, or changes in infrastructure that can affect network bandwidth efficiency, such as virtualization.
> A virtualized platform has the following characteristics that shouldn't be ignored by the Hadoop topology when scheduling tasks, placing replicas, balancing, or fetching blocks for reading:
> 1. VMs on the same physical host are affected by the same hardware failure. To match the reliability of a physical deployment, replicating data across two virtual machines on the same host should be avoided.
> 2. The network between VMs on the same physical host has higher throughput and lower latency, and does not consume any physical switch bandwidth.
> Thus, we propose to make the Hadoop network topology extendable and to introduce a new level in the hierarchical topology, a node-group level, which maps well onto an infrastructure based on a virtualized environment.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
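The extra node-group layer proposed above changes locality comparisons: two VMs under the same node group (the same physical host) are "closer" than two nodes that only share a rack. A minimal sketch of that distance calculation over slash-separated topology paths; the class and method names are illustrative, not Hadoop's actual NetworkTopologyWithNodeGroup API:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: distance over four-level paths like
// "/datacenter/rack/nodegroup/host". Each level two nodes do NOT share
// adds one hop up and one hop down, as in Hadoop's hierarchical model.
public class NodeGroupTopology {
    static int distance(String pathA, String pathB) {
        List<String> a = Arrays.asList(pathA.split("/"));
        List<String> b = Arrays.asList(pathB.split("/"));
        int common = 0;
        int min = Math.min(a.size(), b.size());
        while (common < min && a.get(common).equals(b.get(common))) {
            common++;
        }
        // Hops from each path up to the deepest shared ancestor.
        return (a.size() - common) + (b.size() - common);
    }

    public static void main(String[] args) {
        String vm1 = "/d1/r1/ng1/host1";
        String vm2 = "/d1/r1/ng1/host2"; // same node group (same physical host)
        String vm3 = "/d1/r1/ng2/host3"; // same rack, different node group
        System.out.println(distance(vm1, vm2)); // siblings under one node group
        System.out.println(distance(vm1, vm3)); // cross node group, same rack
    }
}
```

The smaller distance inside a node group is what lets the scheduler prefer same-host VM traffic, while the replica placement policy uses the same paths to avoid putting two replicas under one node group.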
[jira] [Created] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation
Junping Du created HADOOP-15616:
-----------------------------------
             Summary: Incorporate Tencent Cloud COS File System Implementation
                 Key: HADOOP-15616
                 URL: https://issues.apache.org/jira/browse/HADOOP-15616
             Project: Hadoop Common
          Issue Type: New Feature
          Components: fs/cos
            Reporter: Junping Du

Tencent Cloud is one of the top two cloud vendors in the China market, and its object store COS is widely used among China's cloud users, but it is currently hard for Hadoop users to access data stored on COS because there is no native support for COS in Hadoop. This work aims to integrate Tencent Cloud COS with Hadoop, just as was done before for S3, ADL, OSS, etc. With simple configuration, Hadoop applications will be able to read/write data from COS without any code change.
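The "simple configuration" would follow the pattern of the existing S3A/OSS connectors: a filesystem implementation class plus credentials in core-site.xml. The property and class names below are illustrative assumptions modeled on the other cloud connectors, not the final names from this JIRA:

```xml
<!-- Hypothetical core-site.xml fragment, modeled on the S3A/OSS connectors. -->
<configuration>
  <property>
    <name>fs.cosn.impl</name>
    <value>org.apache.hadoop.fs.cosn.CosNFileSystem</value>
  </property>
  <property>
    <name>fs.cosn.userinfo.secretId</name>
    <value>YOUR_SECRET_ID</value>
  </property>
  <property>
    <name>fs.cosn.userinfo.secretKey</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
  <property>
    <name>fs.cosn.bucket.region</name>
    <value>ap-beijing</value>
  </property>
</configuration>
```

Applications would then address the store through a scheme-prefixed URI (e.g. a cosn:// path), the same way s3a:// or oss:// paths work today.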
[jira] [Created] (HADOOP-15139) [Umbrella] Improvements and fixes for Hadoop shaded client work
Junping Du created HADOOP-15139:
-----------------------------------
             Summary: [Umbrella] Improvements and fixes for Hadoop shaded client work
                 Key: HADOOP-15139
                 URL: https://issues.apache.org/jira/browse/HADOOP-15139
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: Junping Du
            Priority: Critical

In HADOOP-11656, we made great progress in splitting third-party dependencies out of the shaded Hadoop client jar (hadoop-client-api), putting runtime dependencies in hadoop-client-runtime, and providing a shaded hadoop-client-minicluster for tests. However, some work is still left before this feature is fully complete:
- We don't have comprehensive documentation to guide downstream projects/users in using the shaded JARs instead of the previous JARs.
- We should consider wrapping up the Hadoop tools (distcp, aws, azure) so that they have shaded versions.
- More issues could be identified as the shaded jars are adopted in more test and production environments, like HADOOP-15137.
Let's use this umbrella JIRA to track the remaining efforts to improve the Hadoop shaded client. CC [~busbey], [~bharatviswa] and [~vinodkv].
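For downstream builds, the split described above means depending on the shaded artifacts rather than the old monolithic hadoop-client. A sketch of what that looks like in a consumer's pom.xml (the artifact IDs come from the JIRA above; the version number is a placeholder):

```xml
<!-- Compile against the shaded public API only. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client-api</artifactId>
  <version>3.0.0</version>
</dependency>
<!-- Shaded third-party dependencies, needed at runtime only. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client-runtime</artifactId>
  <version>3.0.0</version>
  <scope>runtime</scope>
</dependency>
<!-- Shaded minicluster for tests. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client-minicluster</artifactId>
  <version>3.0.0</version>
  <scope>test</scope>
</dependency>
```

Keeping the third-party classes relocated inside these jars is what protects downstream projects from Hadoop's own dependency versions leaking onto their classpath.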
[jira] [Created] (HADOOP-14958) CLONE - Fix source-level compatibility after HADOOP-11252
Junping Du created HADOOP-14958:
-----------------------------------
             Summary: CLONE - Fix source-level compatibility after HADOOP-11252
                 Key: HADOOP-14958
                 URL: https://issues.apache.org/jira/browse/HADOOP-14958
             Project: Hadoop Common
          Issue Type: Bug
    Affects Versions: 2.7.3, 2.6.4
            Reporter: Junping Du
            Assignee: Tsuyoshi Ozawa
            Priority: Blocker
             Fix For: 2.6.5, 2.7.4

Reported by [~chiwanpark]:
bq. Since the 2.7.3 release, Client.get/setPingInterval was changed from public to package-private.
bq. Giraph is one of the projects broken by this change. (https://github.com/apache/giraph/blob/release-1.0/giraph-core/src/main/java/org/apache/giraph/job/GiraphJob.java#L202)
[jira] [Created] (HADOOP-14842) Hadoop 2.8.2 release build process get stuck due to java issue
Junping Du created HADOOP-14842:
-----------------------------------
             Summary: Hadoop 2.8.2 release build process get stuck due to java issue
                 Key: HADOOP-14842
                 URL: https://issues.apache.org/jira/browse/HADOOP-14842
             Project: Hadoop Common
          Issue Type: Bug
          Components: build
            Reporter: Junping Du
            Priority: Blocker

My latest 2.8.2 release build (via Docker) failed with the following errors:
{noformat}
"/usr/bin/mvn -Dmaven.repo.local=/maven -pl hadoop-maven-plugins -am clean install
Error: JAVA_HOME is not defined correctly.
We cannot execute /usr/lib/jvm/java-7-oracle/bin/java"
{noformat}
This looks related to HADOOP-14474. However, reverting that patch doesn't work here because the build then fails even earlier, in the Java download/installation step - maybe, as mentioned in HADOOP-14474, some Java 7 download address was changed by Oracle. Hard-coding my local JAVA_HOME in create-release or the Dockerfile doesn't work either, although it shows the correct Java home. My suspicion so far is that we still need to download Java 7 from somewhere for the Docker build process to continue, but I haven't found a way through this yet.
[jira] [Created] (HADOOP-14814) Fix incompatible API change on FsServerDefaults to HADOOP-14104
Junping Du created HADOOP-14814:
-----------------------------------
             Summary: Fix incompatible API change on FsServerDefaults to HADOOP-14104
                 Key: HADOOP-14814
                 URL: https://issues.apache.org/jira/browse/HADOOP-14814
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: Junping Du
            Assignee: Junping Du
            Priority: Blocker

In HADOOP-14104, we removed a constructor, replacing it with one that takes more parameters. This is an incompatible API change, given that FsServerDefaults is marked as Public. We should fix it before 2.8.2 and the 3.0 beta go out.
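The usual fix for this kind of break is to restore the removed constructor as a deprecated overload that delegates to the new one with a default for the added parameter. A self-contained sketch of the pattern (the field names and the added parameter are illustrative, not FsServerDefaults' real ones):

```java
// Sketch of restoring a removed constructor for API compatibility.
// "checksumType" stands in for whatever parameter the new constructor added.
public class ServerDefaults {
    private final long blockSize;
    private final int replication;
    private final String checksumType; // hypothetical newly added field

    // New constructor with the extra parameter.
    public ServerDefaults(long blockSize, int replication, String checksumType) {
        this.blockSize = blockSize;
        this.replication = replication;
        this.checksumType = checksumType;
    }

    // Restored old constructor: existing callers keep compiling and linking.
    @Deprecated
    public ServerDefaults(long blockSize, int replication) {
        this(blockSize, replication, "CRC32"); // sensible default for the new field
    }

    public long getBlockSize() { return blockSize; }
    public int getReplication() { return replication; }
    public String getChecksumType() { return checksumType; }
}
```

With both overloads present, code built against the old signature keeps working, and the @Deprecated tag steers new code toward the richer constructor.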
[jira] [Reopened] (HADOOP-14207) "dfsadmin -refreshCallQueue" fails with DecayRpcScheduler
[ https://issues.apache.org/jira/browse/HADOOP-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du reopened HADOOP-14207:
---------------------------------

> "dfsadmin -refreshCallQueue" fails with DecayRpcScheduler
> ---------------------------------------------------------
>
>                 Key: HADOOP-14207
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14207
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: rpc-server
>            Reporter: Surendra Singh Lilhore
>            Assignee: Surendra Singh Lilhore
>            Priority: Blocker
>             Fix For: 2.9.0, 3.0.0-alpha3
>
>         Attachments: HADOOP-14207.001.patch, HADOOP-14207.002.patch, HADOOP-14207.003.patch, HADOOP-14207.004.patch, HADOOP-14207.005.patch, HADOOP-14207.006.patch
>
> {noformat}
> java.lang.RuntimeException: org.apache.hadoop.ipc.DecayRpcScheduler could not be constructed.
>         at org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:89)
>         at org.apache.hadoop.ipc.CallQueueManager.swapQueue(CallQueueManager.java:260)
>         at org.apache.hadoop.ipc.Server.refreshCallQueue(Server.java:650)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.refreshCallQueue(NameNodeRpcServer.java:1582)
>         at org.apache.hadoop.ipc.protocolPB.RefreshCallQueueProtocolServerSideTranslatorPB.refreshCallQueue(RefreshCallQueueProtocolServerSideTranslatorPB.java:49)
>         at org.apache.hadoop.ipc.proto.RefreshCallQueueProtocolProtos$RefreshCallQueueProtocolService$2.callBlockingMethod(RefreshCallQueueProtocolProtos.java:769)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
> Caused by: org.apache.hadoop.metrics2.MetricsException: Metrics source DecayRpcSchedulerMetrics2.ipc.65110 already exists!
>         at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:144)
>         at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:117)
> {noformat}
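The "Metrics source ... already exists!" failure is the classic register-twice pattern: swapping in a new scheduler registers a metrics source under a name the old scheduler never released. A minimal stdlib sketch of the pattern and the shape of the fix (unregister the old name before constructing the replacement); the class names are illustrative, not Hadoop's metrics API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative registry mimicking DefaultMetricsSystem's unique-name check.
public class MetricsRegistry {
    private final Map<String, Object> sources = new HashMap<>();

    public void register(String name, Object source) {
        if (sources.containsKey(name)) {
            throw new IllegalStateException("Metrics source " + name + " already exists!");
        }
        sources.put(name, source);
    }

    public void unregister(String name) {
        sources.remove(name);
    }

    // Broken swap: the old source still holds the name, so this throws.
    public void swapBroken(String name, Object replacement) {
        register(name, replacement);
    }

    // Fixed swap: release the old registration first.
    public void swapFixed(String name, Object replacement) {
        unregister(name);
        register(name, replacement);
    }
}
```

This mirrors why -refreshCallQueue fails only on the second refresh: the first DecayRpcScheduler claims the source name, and the replacement's registration collides with it.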
[jira] [Reopened] (HADOOP-13200) Implement customizable and configurable erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du reopened HADOOP-13200:
---------------------------------

The trunk build got broken... Can we make sure the build passes before we commit a patch?

> Implement customizable and configurable erasure coders
> ------------------------------------------------------
>
>                 Key: HADOOP-13200
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13200
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Kai Zheng
>            Assignee: Tim Yao
>            Priority: Blocker
>              Labels: hdfs-ec-3.0-must-do
>             Fix For: 3.0.0-alpha3
>
>         Attachments: HADOOP-13200.02.patch, HADOOP-13200.03.patch, HADOOP-13200.04.patch, HADOOP-13200.05.patch, HADOOP-13200.06.patch, HADOOP-13200.07.patch, HADOOP-13200.08.patch, HADOOP-13200.09.patch, HADOOP-13200.10.patch, HADOOP-13200.11.patch
>
> This is a follow-on task for HADOOP-13010, as discussed over there. There may be a better approach for customizing and configuring erasure coders than the current raw coder factory, as [~cmccabe] suggested. I will copy the relevant comments here to continue the discussion.
[jira] [Reopened] (HADOOP-13996) Fix some release build issues
[ https://issues.apache.org/jira/browse/HADOOP-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du reopened HADOOP-13996:
---------------------------------

Reopening for a Jenkins test of the branch-2 patch.

> Fix some release build issues
> -----------------------------
>
>                 Key: HADOOP-13996
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13996
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: build
>    Affects Versions: 3.0.0-alpha2
>            Reporter: Andrew Wang
>            Assignee: Andrew Wang
>            Priority: Blocker
>             Fix For: 3.0.0-alpha2
>
>         Attachments: hadoop-13996.001.patch, hadoop-13996.002.patch, HADOOP-13996-branch-2.001.patch
>
> Found some build issues while doing test runs with the create-release.sh script.
[jira] [Resolved] (HADOOP-13362) DefaultMetricsSystem leaks the source name when a source unregisters
[ https://issues.apache.org/jira/browse/HADOOP-13362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du resolved HADOOP-13362.
---------------------------------
          Resolution: Fixed
    Target Version/s: 2.7.4 (was: 2.7.4, 2.8.1)

I forgot we have a different patch - YARN-5190 - for 2.8 and later. Resolving it.

> DefaultMetricsSystem leaks the source name when a source unregisters
> --------------------------------------------------------------------
>
>                 Key: HADOOP-13362
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13362
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: metrics
>    Affects Versions: 2.7.2
>            Reporter: Jason Lowe
>            Assignee: Junping Du
>            Priority: Blocker
>             Fix For: 2.7.4
>
>         Attachments: HADOOP-13362-branch-2.7.patch
>
> Ran across a nodemanager that was spending most of its time in GC. Upon examination of the heap, most of the memory was going to the map of names in org.apache.hadoop.metrics2.lib.UniqueNames. In this case the map had almost 2 million entries. Looking at a few of the map entries showed names like "ContainerResource_container_e01_1459548490386_8560138_01_002020", "ContainerResource_container_e01_1459548490386_2378745_01_000410", etc. Looks like the ContainerMetrics for each container will cause a unique name to be registered with UniqueNames, and the name will never be unregistered.
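The leak described above is a grow-only map: every container registers a fresh "ContainerResource_container_..." name in a UniqueNames-style table, and nothing ever removes it. A stdlib sketch of the data structure and the shape of the fix (a remove path invoked when a source unregisters); the class is illustrative, not Hadoop's actual UniqueNames:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative UniqueNames-style table: maps a requested name to a
// unique variant, suffixing duplicates with "-1", "-2", ...
public class UniqueNames {
    private final Map<String, Integer> names = new HashMap<>();

    public synchronized String uniqueName(String name) {
        Integer count = names.get(name);
        if (count == null) {
            names.put(name, 1);
            return name;
        }
        names.put(name, count + 1);
        return name + "-" + count;
    }

    // The missing piece in the leak: drop the entry when its source goes away.
    public synchronized void remove(String name) {
        names.remove(name);
    }

    public synchronized int size() {
        return names.size();
    }
}
```

Without a remove() path, a long-lived NodeManager accumulates one entry per container it has ever run, which matches the ~2 million entries observed in the heap dump.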
[jira] [Reopened] (HADOOP-13362) DefaultMetricsSystem leaks the source name when a source unregisters
[ https://issues.apache.org/jira/browse/HADOOP-13362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du reopened HADOOP-13362:
---------------------------------

> DefaultMetricsSystem leaks the source name when a source unregisters
> --------------------------------------------------------------------
>
>                 Key: HADOOP-13362
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13362
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: metrics
>    Affects Versions: 2.7.2
>            Reporter: Jason Lowe
>            Assignee: Junping Du
>            Priority: Blocker
>             Fix For: 2.7.4
>
>         Attachments: HADOOP-13362-branch-2.7.patch
>
> Ran across a nodemanager that was spending most of its time in GC. Upon examination of the heap, most of the memory was going to the map of names in org.apache.hadoop.metrics2.lib.UniqueNames. In this case the map had almost 2 million entries. Looking at a few of the map entries showed names like "ContainerResource_container_e01_1459548490386_8560138_01_002020", "ContainerResource_container_e01_1459548490386_2378745_01_000410", etc. Looks like the ContainerMetrics for each container will cause a unique name to be registered with UniqueNames, and the name will never be unregistered.
[jira] [Resolved] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos
[ https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du resolved HADOOP-13119.
---------------------------------
       Resolution: Fixed
    Fix Version/s: (was: 2.8.0)
                   2.8.1

> Web UI error accessing links which need authorization when Kerberos
> -------------------------------------------------------------------
>
>                 Key: HADOOP-13119
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13119
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.8.0, 2.7.4
>            Reporter: Jeffrey E Rodriguez
>            Assignee: Yuanbo Liu
>              Labels: security
>             Fix For: 2.7.4, 2.8.1
>
>         Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, HADOOP-13119.003.patch, HADOOP-13119.004.patch, HADOOP-13119.005.patch, HADOOP-13119.005.patch, screenshot-1.png
>
> Using Hadoop in secure mode:
> log in as a kdc user and kinit,
> start firefox and enable Kerberos,
> access http://localhost:50070/logs/
> Get 403 authorization errors; only the hdfs user can access logs.
> As a user, I would expect to be able to reach the logs link in the web interface.
> Same result when using curl:
> curl -v --negotiate -u tester: http://localhost:50070/logs/
> HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. Either don't show the links if only the hdfs user is able to access them.
> 2. Or provide a mechanism to add users to the web application realm.
> 3. Note that we pass authentication, so the issue is authorization to /logs/.
> I suspect that the /logs/ path is secured in the web descriptor, so by default users don't have access to secure paths.
[jira] [Reopened] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos
[ https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du reopened HADOOP-13119:
---------------------------------

> Web UI error accessing links which need authorization when Kerberos
> -------------------------------------------------------------------
>
>                 Key: HADOOP-13119
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13119
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.8.0, 2.7.4
>            Reporter: Jeffrey E Rodriguez
>            Assignee: Yuanbo Liu
>              Labels: security
>             Fix For: 2.8.0, 2.7.4
>
>         Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, HADOOP-13119.003.patch, HADOOP-13119.004.patch, HADOOP-13119.005.patch, HADOOP-13119.005.patch, screenshot-1.png
>
> Using Hadoop in secure mode:
> log in as a kdc user and kinit,
> start firefox and enable Kerberos,
> access http://localhost:50070/logs/
> Get 403 authorization errors; only the hdfs user can access logs.
> As a user, I would expect to be able to reach the logs link in the web interface.
> Same result when using curl:
> curl -v --negotiate -u tester: http://localhost:50070/logs/
> HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. Either don't show the links if only the hdfs user is able to access them.
> 2. Or provide a mechanism to add users to the web application realm.
> 3. Note that we pass authentication, so the issue is authorization to /logs/.
> I suspect that the /logs/ path is secured in the web descriptor, so by default users don't have access to secure paths.
[jira] [Created] (HADOOP-13098) Dynamic LogLevel setting page should accept log level string with mixing upper case and lower case
Junping Du created HADOOP-13098:
-----------------------------------
             Summary: Dynamic LogLevel setting page should accept log level string with mixing upper case and lower case
                 Key: HADOOP-13098
                 URL: https://issues.apache.org/jira/browse/HADOOP-13098
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: Junping Du
            Assignee: Junping Du

Our current log level settings page, http://daemon_web_service_address/logLevel, only accepts a fully upper-case log level string, which means "Debug" or "debug" is treated as a bad log level. I think we should enhance the tool to ignore upper/lower case.
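The fix amounts to normalizing the submitted string before validating it. A stdlib sketch of the idea (the class and the set of accepted levels are illustrative, not Hadoop's actual LogLevel servlet):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

// Illustrative normalization for a /logLevel-style endpoint: accept
// "debug", "Debug", or "DEBUG" and canonicalize before validating.
public class LogLevelParser {
    private static final List<String> VALID =
        Arrays.asList("ALL", "TRACE", "DEBUG", "INFO", "WARN", "ERROR", "FATAL", "OFF");

    public static String normalize(String level) {
        // Locale.ROOT avoids locale-sensitive surprises (e.g. Turkish dotless i).
        String canonical = level.trim().toUpperCase(Locale.ROOT);
        if (!VALID.contains(canonical)) {
            throw new IllegalArgumentException("Bad log level: " + level);
        }
        return canonical;
    }
}
```

Genuinely invalid strings still get rejected; only the case sensitivity goes away.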
[jira] [Resolved] (HADOOP-12715) TestValueQueue#testgetAtMostPolicyALL fails intermittently
[ https://issues.apache.org/jira/browse/HADOOP-12715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du resolved HADOOP-12715.
---------------------------------
       Resolution: Fixed
    Fix Version/s: (was: 2.6.4)

> TestValueQueue#testgetAtMostPolicyALL fails intermittently
> ----------------------------------------------------------
>
>                 Key: HADOOP-12715
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12715
>             Project: Hadoop Common
>          Issue Type: Test
>            Reporter: Xiao Chen
>            Assignee: Xiao Chen
>             Fix For: 2.8.0, 2.7.3
>
>         Attachments: HADOOP-12715.01.patch, HADOOP-12715.02.patch, HADOOP-12715.03.patch
>
> The test fails intermittently with the following error.
> Error Message
> {noformat}
> expected:<19> but was:<10>
> {noformat}
> Stacktrace
> {noformat}
> java.lang.AssertionError: expected:<19> but was:<10>
>         at org.junit.Assert.fail(Assert.java:88)
>         at org.junit.Assert.failNotEquals(Assert.java:743)
>         at org.junit.Assert.assertEquals(Assert.java:118)
>         at org.junit.Assert.assertEquals(Assert.java:555)
>         at org.junit.Assert.assertEquals(Assert.java:542)
>         at org.apache.hadoop.crypto.key.TestValueQueue.testgetAtMostPolicyALL(TestValueQueue.java:149)
> {noformat}
[jira] [Reopened] (HADOOP-12715) TestValueQueue#testgetAtMostPolicyALL fails intermittently
[ https://issues.apache.org/jira/browse/HADOOP-12715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du reopened HADOOP-12715:
---------------------------------

> TestValueQueue#testgetAtMostPolicyALL fails intermittently
> ----------------------------------------------------------
>
>                 Key: HADOOP-12715
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12715
>             Project: Hadoop Common
>          Issue Type: Test
>            Reporter: Xiao Chen
>            Assignee: Xiao Chen
>             Fix For: 2.8.0, 2.7.3, 2.6.4
>
>         Attachments: HADOOP-12715.01.patch, HADOOP-12715.02.patch, HADOOP-12715.03.patch
>
> The test fails intermittently with the following error.
> Error Message
> {noformat}
> expected:<19> but was:<10>
> {noformat}
> Stacktrace
> {noformat}
> java.lang.AssertionError: expected:<19> but was:<10>
>         at org.junit.Assert.fail(Assert.java:88)
>         at org.junit.Assert.failNotEquals(Assert.java:743)
>         at org.junit.Assert.assertEquals(Assert.java:118)
>         at org.junit.Assert.assertEquals(Assert.java:555)
>         at org.junit.Assert.assertEquals(Assert.java:542)
>         at org.apache.hadoop.crypto.key.TestValueQueue.testgetAtMostPolicyALL(TestValueQueue.java:149)
> {noformat}
[jira] [Created] (HADOOP-12690) Consolidate access of sun.misc.Unsafe
Junping Du created HADOOP-12690:
-----------------------------------
             Summary: Consolidate access of sun.misc.Unsafe
                 Key: HADOOP-12690
                 URL: https://issues.apache.org/jira/browse/HADOOP-12690
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: Junping Du
            Assignee: Junping Du

Per the discussion in HADOOP-12630 (https://issues.apache.org/jira/browse/HADOOP-12630?focusedCommentId=15082142=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15082142), we found that access to sun.misc.Unsafe can be problematic for some JVMs on other platforms. Also, following hints from other comments, it would be better to consolidate it into a helper/utility method shared by the several places that need it (FastByteComparisons, NativeIO, ShortCircuitShm).
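Consolidation here usually means one utility class that performs the reflective sun.misc.Unsafe lookup exactly once and exposes an availability check, so FastByteComparisons, NativeIO, and ShortCircuitShm stop duplicating the try/catch. A hedged sketch of that holder (the class name is illustrative; whether the lookup succeeds depends on the JVM, which is exactly the portability problem this JIRA describes):

```java
import java.lang.reflect.Field;

// Illustrative single point of access for sun.misc.Unsafe: the reflective
// lookup runs once in a static initializer, and callers branch on
// isAvailable() instead of each repeating its own try/catch.
public class UnsafeHolder {
    private static final Object UNSAFE = lookup();

    private static Object lookup() {
        try {
            Class<?> clazz = Class.forName("sun.misc.Unsafe");
            Field f = clazz.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            return f.get(null);
        } catch (Throwable t) {
            // Some JVMs/platforms don't expose this; callers must fall back
            // to pure-Java code paths in that case.
            return null;
        }
    }

    public static boolean isAvailable() {
        return UNSAFE != null;
    }
}
```

Catching Throwable (not just ReflectiveOperationException) matters because some JVMs fail this lookup with errors rather than exceptions; the shared holder turns every failure mode into a single boolean the call sites can test.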
[jira] [Resolved] (HADOOP-12283) CLONE - Backport Network Topology Extension for Virtualization (HADOOP-8468) to branch-1
[ https://issues.apache.org/jira/browse/HADOOP-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du resolved HADOOP-12283.
---------------------------------
    Resolution: Duplicate
      Assignee: (was: Junping Du)

> CLONE - Backport Network Topology Extension for Virtualization (HADOOP-8468) to branch-1
> ----------------------------------------------------------------------------------------
>
>                 Key: HADOOP-12283
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12283
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: ha, io
>    Affects Versions: 1.0.0
>            Reporter: RacingDawn
>              Labels: features
>             Fix For: 1.2.0
>
>         Attachments: HADOOP-8817-v2.patch, HADOOP-8817-v3.patch, HADOOP-8817-v4.patch, HADOOP-8817.patch
>
> HADOOP-8468 proposes network topology changes for running on virtualized infrastructure, which include:
> 1. Add a NodeGroup layer in the new NetworkTopology (also known as NetworkTopologyWithNodeGroup): HADOOP-8469, HADOOP-8470
> 2. Update the replica placement/removal policy to reflect the new topology layer: HDFS-3498, HDFS-3601
> 3. Update the balancer policy: HDFS-3495
> 4. Update the task scheduling policy to reflect the new topology layer and support the case where compute nodes (NodeManager or TaskTracker) and data nodes are separated into different VMs but still benefit from physical host locality: YARN-18, YARN-19.
> This JIRA will address the backport work on branch-1, which will be divided into 4 issues/patches in the related JIRA issues.
[jira] [Reopened] (HADOOP-8151) Error handling in snappy decompressor throws invalid exceptions
[ https://issues.apache.org/jira/browse/HADOOP-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du reopened HADOOP-8151:
--------------------------------

> Error handling in snappy decompressor throws invalid exceptions
> ---------------------------------------------------------------
>
>                 Key: HADOOP-8151
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8151
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: io, native
>    Affects Versions: 1.0.2, 2.0.0-alpha
>            Reporter: Todd Lipcon
>            Assignee: Matt Foley
>             Fix For: 1.0.3, 3.0.0
>
>         Attachments: HADOOP-8151-branch-1.0.patch, HADOOP-8151.patch, HADOOP-8151.patch
>
> SnappyDecompressor.c has the following code in a few places:
> {code}
> THROW(env, "Ljava/lang/InternalError", "Could not decompress data. Buffer length is too small.");
> {code}
> This is incorrect, though, since the THROW macro doesn't need the "L" before the class name. This results in a ClassNotFoundException for Ljava.lang.InternalError being thrown, instead of the intended exception.
[jira] [Resolved] (HADOOP-8151) Error handling in snappy decompressor throws invalid exceptions
[ https://issues.apache.org/jira/browse/HADOOP-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du resolved HADOOP-8151.
--------------------------------
       Resolution: Fixed
    Fix Version/s: 2.8.0

Committed the patch to branch-2.

> Error handling in snappy decompressor throws invalid exceptions
> ---------------------------------------------------------------
>
>                 Key: HADOOP-8151
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8151
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: io, native
>    Affects Versions: 1.0.2, 2.0.0-alpha
>            Reporter: Todd Lipcon
>            Assignee: Matt Foley
>             Fix For: 3.0.0, 2.8.0, 1.0.3
>
>         Attachments: HADOOP-8151-branch-1.0.patch, HADOOP-8151.patch, HADOOP-8151.patch
>
> SnappyDecompressor.c has the following code in a few places:
> {code}
> THROW(env, "Ljava/lang/InternalError", "Could not decompress data. Buffer length is too small.");
> {code}
> This is incorrect, though, since the THROW macro doesn't need the "L" before the class name. This results in a ClassNotFoundException for Ljava.lang.InternalError being thrown, instead of the intended exception.
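The symptom described is easy to reproduce from plain Java: the "L" prefix belongs to JNI field/signature descriptors, not to the class name that a class lookup expects, so resolution fails and a ClassNotFoundException surfaces instead of the intended InternalError. A small demonstration:

```java
// Demonstrates why THROW(env, "Ljava/lang/InternalError", ...) misfires:
// the "L" descriptor prefix is not part of the class name, so lookup fails
// and the caller sees ClassNotFoundException instead of InternalError.
public class ThrowMacroBug {
    public static boolean resolves(String name) {
        try {
            Class.forName(name);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(resolves("java.lang.InternalError"));  // correct name
        System.out.println(resolves("Ljava.lang.InternalError")); // buggy "L" prefix
    }
}
```

The failing name here, Ljava.lang.InternalError, is exactly the class named in the downstream HADOOP-12033 stack trace.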
[jira] [Resolved] (HADOOP-12033) Reducer task failure with java.lang.NoClassDefFoundError: Ljava/lang/InternalError at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect
[ https://issues.apache.org/jira/browse/HADOOP-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du resolved HADOOP-12033.
---------------------------------
    Resolution: Duplicate

Thanks Zhihai Xu for the confirmation on this. HADOOP-8151 has already been committed/merged to branch-2, so resolving this JIRA as a duplicate.

> Reducer task failure with java.lang.NoClassDefFoundError: Ljava/lang/InternalError at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect
> -------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-12033
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12033
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Ivan Mitic
>         Attachments: 0001-HADOOP-12033.patch
>
> We have noticed intermittent reducer task failures with the below exception:
> {code}
> Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#9
>         at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
>         at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NoClassDefFoundError: Ljava/lang/InternalError
>         at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect(Native Method)
>         at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompress(SnappyDecompressor.java:239)
>         at org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:88)
>         at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
>         at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
>         at org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.shuffle(InMemoryMapOutput.java:97)
>         at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:534)
>         at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:329)
>         at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193)
> Caused by: java.lang.ClassNotFoundException: Ljava.lang.InternalError
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>         ... 9 more
> {code}
> Usually, the reduce task succeeds on retry. Some of the symptoms are similar to HADOOP-8423, but that fix is already included (this is on Hadoop 2.6).
[jira] [Created] (HADOOP-11962) Sasl message with MD5 challenge text shouldn't be LOG as debug level.
Junping Du created HADOOP-11962:
-----------------------------------
             Summary: Sasl message with MD5 challenge text shouldn't be LOG as debug level.
                 Key: HADOOP-11962
                 URL: https://issues.apache.org/jira/browse/HADOOP-11962
             Project: Hadoop Common
          Issue Type: Bug
          Components: ipc, security
    Affects Versions: 2.6.0
            Reporter: Junping Du
            Assignee: Junping Du
            Priority: Critical

Some log examples:
{noformat}
2014-09-24 05:42:12,975 DEBUG security.SaslRpcServer (SaslRpcServer.java:create(174)) - Created SASL server with mechanism = DIGEST-MD5
2014-09-24 05:42:12,977 DEBUG ipc.Server (Server.java:doSaslReply(1424)) - Sending sasl message state: NEGOTIATE auths { method: TOKEN mechanism: DIGEST-MD5 protocol: serverId: default challenge: realm=\default\,nonce=\yIvZDpbzGGq3yIrMynVKnEv9Z0qw6lxpr9nZxm0r\,qop=\auth\,charset=utf-8,algorithm=md5-sess }
... ...
2014-09-24 06:21:59,146 DEBUG ipc.Server (Server.java:doSaslReply(1424)) - Sending sasl message state: CHALLENGE token: `l\006\t*\206H\206\367\022\001\002\002\002\000o]0[\240\003\002\001\005\241\003\002\001\017\242O0M\240\003\002\001\020\242F\004D#\030\336|kb\232\033V\340\342F\334\230\347\230\362)u!=\215\271\006\244:\244\221vn\215*\323\353\360\350\3006\366\3340\245\371Ri\273\374\307\017\207Z\233\326\217\224!yo$\373\233\315:JsY!^?
{noformat}
We should avoid emitting this kind of log in production environments, even at debug log level.
[jira] [Resolved] (HADOOP-10512) Document usage of node-group layer topology
[ https://issues.apache.org/jira/browse/HADOOP-10512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du resolved HADOOP-10512.
---------------------------------
    Resolution: Duplicate
      Assignee: (was: Junping Du)

> Document usage of node-group layer topology
> -------------------------------------------
>
>                 Key: HADOOP-10512
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10512
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: documentation
>            Reporter: Junping Du
>
> For the work under the umbrella of HADOOP-8468, users can enable a nodegroup layer between node and rack in some situations. We should document it after YARN-18 and YARN-19 are figured out.
[jira] [Created] (HADOOP-11274) ConcurrentModificationException in Configuration Copy Constructor
Junping Du created HADOOP-11274: --- Summary: ConcurrentModificationException in Configuration Copy Constructor Key: HADOOP-11274 URL: https://issues.apache.org/jira/browse/HADOOP-11274 Project: Hadoop Common Issue Type: Bug Components: conf Reporter: Junping Du Assignee: Junping Du Priority: Critical The exception below occurs when configuration updates happen in parallel: {noformat} java.util.ConcurrentModificationException at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922) at java.util.HashMap$EntryIterator.next(HashMap.java:962) at java.util.HashMap$EntryIterator.next(HashMap.java:960) at java.util.HashMap.putAllForCreate(HashMap.java:554) at java.util.HashMap.init(HashMap.java:298) at org.apache.hadoop.conf.Configuration.init(Configuration.java:703) {noformat} In the copy constructor of Configuration, public Configuration(Configuration other), the copy of the updatingResource data structure is not synchronized properly. Configuration.get() eventually calls loadProperty(), where updatingResource gets updated. So what happens here is that one thread is copying the Configuration, as demonstrated in the stack trace, while another thread is doing Configuration.get(key); a ConcurrentModificationException occurs because the copy of updatingResource is not synchronized in the constructor. We should make the updates to updatingResource synchronized, and also fix other small synchronization issues there. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
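The race and the style of fix can be sketched with a simplified stand-in class holding a single shared map; this is not the actual Hadoop Configuration code, just a minimal illustration of why the copy must hold the same lock the writers use.

```java
// Simplified stand-in for Configuration (not the actual Hadoop class):
// the copy constructor must hold the same lock that writers use, otherwise
// a concurrent set() can make the HashMap copy throw
// ConcurrentModificationException.
import java.util.HashMap;
import java.util.Map;

public class Conf {
    private final Map<String, String> updatingResource = new HashMap<>();

    public Conf() { }

    // Copy constructor: lock the source instance while copying its map.
    public Conf(Conf other) {
        synchronized (other) {
            updatingResource.putAll(other.updatingResource);
        }
    }

    // Writers take the same lock, so copy and update cannot interleave.
    public synchronized void set(String key, String value) {
        updatingResource.put(key, value);
    }

    public synchronized String get(String key) {
        return updatingResource.get(key);
    }
}
```

The key point is that synchronized instance methods lock `this`, so the copy constructor must synchronize on `other` (the instance being read) rather than on the new object under construction.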
[jira] [Created] (HADOOP-11177) Reduce tar ball size for MR over distributed cache
Junping Du created HADOOP-11177: --- Summary: Reduce tar ball size for MR over distributed cache Key: HADOOP-11177 URL: https://issues.apache.org/jira/browse/HADOOP-11177 Project: Hadoop Common Issue Type: Improvement Components: build Reporter: Junping Du Assignee: Junping Du Priority: Critical The current tar ball built from mvn package -Pdist -DskipTests -Dtar is over 160M in size. We need smaller tar ball pieces for features like MR over distributed cache, to support rolling updates of the cluster. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-10729) Add tests for PB RPC in case version mismatch of client and server
Junping Du created HADOOP-10729: --- Summary: Add tests for PB RPC in case of version mismatch between client and server Key: HADOOP-10729 URL: https://issues.apache.org/jira/browse/HADOOP-10729 Project: Hadoop Common Issue Type: Test Components: ipc Affects Versions: 2.4.0 Reporter: Junping Du Assignee: Junping Du We have ProtocolInfo specified on the protocol interface with version info, but we don't have a unit test to verify if/how it works. We should add tests to verify that this annotation works as expected. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10512) Document usage of node-group layer topology
Junping Du created HADOOP-10512: --- Summary: Document usage of node-group layer topology Key: HADOOP-10512 URL: https://issues.apache.org/jira/browse/HADOOP-10512 Project: Hadoop Common Issue Type: Sub-task Components: documentation Reporter: Junping Du Assignee: Junping Du As part of the work under the HADOOP-8468 umbrella, users can enable a nodegroup layer between node and rack in some situations. We should document it after YARN-18 and YARN-19 are figured out. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10006) Compilation failure in trunk for o.a.h.fs.swift.util.JSONUtil
Junping Du created HADOOP-10006: --- Summary: Compilation failure in trunk for o.a.h.fs.swift.util.JSONUtil Key: HADOOP-10006 URL: https://issues.apache.org/jira/browse/HADOOP-10006 Project: Hadoop Common Issue Type: Bug Components: fs, util Reporter: Junping Du Priority: Blocker The error is as follows: ... [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) on project hadoop-openstack: Compilation failure: Compilation failure: [ERROR] /home/jdu/bdc/hadoop-trunk/hadoop-common/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/JSONUtil.java:[97,33] type parameters of <T>T cannot be determined; no unique maximal instance exists for type variable T with upper bounds T,java.lang.Object [ERROR] /home/jdu/bdc/hadoop-trunk/hadoop-common/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/JSONUtil.java:[115,33] type parameters of <T>T cannot be determined; no unique maximal instance exists for type variable T with upper bounds T,java.lang.Object -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-10001) Document topology with NodeGroup layer configuration in branch-1
Junping Du created HADOOP-10001: --- Summary: Document topology with NodeGroup layer configuration in branch-1 Key: HADOOP-10001 URL: https://issues.apache.org/jira/browse/HADOOP-10001 Project: Hadoop Common Issue Type: Bug Reporter: Junping Du Assignee: Junping Du The NodeGroup layer in NetworkTopology is supported as of the 1.2.0 release. We need to document the settings in hdfs-site.xml and mapred-site.xml that enable this layer for BlockPlacementPolicy and map task scheduling. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9980) LightWeightGSet's modification field should be volatile so that changes by other threads can be detected during iteration.
Junping Du created HADOOP-9980: -- Summary: LightWeightGSet's modification field should be volatile so that changes by other threads can be detected during iteration. Key: HADOOP-9980 URL: https://issues.apache.org/jira/browse/HADOOP-9980 Project: Hadoop Common Issue Type: Bug Components: util Reporter: Junping Du Assignee: Junping Du Fix For: 2.3.0 LightWeightGSet should have a volatile modification field (like LightWeightHashSet or LightWeight) that is used to detect updates while iterating, so the iterator can throw a ConcurrentModificationException. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
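The fail-fast pattern being requested can be sketched as follows; this is an illustrative container, not the actual LightWeightGSet code. The iterator snapshots a volatile modification counter at creation and throws if it changes.

```java
// Illustrative fail-fast pattern (not the actual LightWeightGSet code):
// a volatile modification counter lets an iterator detect concurrent changes.
import java.util.ConcurrentModificationException;
import java.util.Iterator;

public class FailFastList<E> implements Iterable<E> {
    private Object[] elements = new Object[8];
    private int size = 0;
    private volatile int modification = 0; // volatile: visible across threads

    public void add(E e) {
        if (size == elements.length) {
            Object[] bigger = new Object[size * 2];
            System.arraycopy(elements, 0, bigger, 0, size);
            elements = bigger;
        }
        elements[size++] = e;
        modification++;
    }

    @Override
    public Iterator<E> iterator() {
        return new Iterator<E>() {
            private final int expected = modification; // snapshot at creation
            private int cursor = 0;

            @Override public boolean hasNext() { return cursor < size; }

            @SuppressWarnings("unchecked")
            @Override public E next() {
                if (modification != expected) {
                    throw new ConcurrentModificationException();
                }
                return (E) elements[cursor++];
            }
        };
    }
}
```

Without `volatile`, a mutation performed by another thread might never become visible to the iterating thread, so the check could silently pass; marking the counter volatile is what makes the detection reliable across threads.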
[jira] [Created] (HADOOP-9964) O.A.H.U.ReflectionUtils.printThreadInfo() is not thread-safe, which causes TestHttpServer to hang for 10 minutes or longer.
Junping Du created HADOOP-9964: -- Summary: O.A.H.U.ReflectionUtils.printThreadInfo() is not thread-safe, which causes TestHttpServer to hang for 10 minutes or longer. Key: HADOOP-9964 URL: https://issues.apache.org/jira/browse/HADOOP-9964 Project: Hadoop Common Issue Type: Bug Components: util Reporter: Junping Du Assignee: Junping Du The printThreadInfo() method in ReflectionUtils is not thread-safe, which causes two or more threads calling this method from StackServlet to deadlock. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
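A sketch of the kind of fix implied here: serialize access to the shared thread-dump logic with synchronized. The class and method names below are illustrative, not the actual ReflectionUtils API.

```java
// Hedged sketch: ThreadDumper and dump() are illustrative names, not the
// actual ReflectionUtils.printThreadInfo() code. The point is that the
// shared dump logic runs under a lock so concurrent servlet requests
// cannot interleave.
import java.io.PrintWriter;
import java.io.StringWriter;
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumper {
    private static final ThreadMXBean BEAN = ManagementFactory.getThreadMXBean();

    // synchronized: only one caller at a time walks the shared MXBean state.
    public static synchronized String dump() {
        StringWriter sw = new StringWriter();
        PrintWriter out = new PrintWriter(sw);
        for (long id : BEAN.getAllThreadIds()) {
            ThreadInfo info = BEAN.getThreadInfo(id);
            if (info != null) {
                out.println("Thread " + info.getThreadId() + ": " + info.getThreadName());
            }
        }
        out.flush();
        return sw.toString();
    }
}
```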
[jira] [Created] (HADOOP-9784) Add builder in creating HttpServer
Junping Du created HADOOP-9784: -- Summary: Add builder for creating HttpServer Key: HADOOP-9784 URL: https://issues.apache.org/jira/browse/HADOOP-9784 Project: Hadoop Common Issue Type: Improvement Reporter: Junping Du Assignee: Junping Du The HttpServer class has quite a lot of constructors for creating instances. Create a builder class that abstracts the construction steps, which helps avoid adding more constructors in the future. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
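A minimal sketch of the builder pattern being proposed; the field and setter names (bindAddress, port, findPort) are assumptions for illustration, not the actual HttpServer API.

```java
// Illustrative builder sketch: SimpleHttpServer and its fields are
// assumptions, not the actual HttpServer class. The builder replaces a
// growing family of overloaded constructors with named, chainable setters.
public class SimpleHttpServer {
    private final String name;
    private final String bindAddress;
    private final int port;
    private final boolean findPort;

    private SimpleHttpServer(Builder b) {
        this.name = b.name;
        this.bindAddress = b.bindAddress;
        this.port = b.port;
        this.findPort = b.findPort;
    }

    @Override
    public String toString() {
        return name + "@" + bindAddress + ":" + port + (findPort ? " (findPort)" : "");
    }

    public static class Builder {
        private String name = "webserver";
        private String bindAddress = "0.0.0.0";
        private int port = 0;
        private boolean findPort = false;

        public Builder setName(String name) { this.name = name; return this; }
        public Builder setBindAddress(String addr) { this.bindAddress = addr; return this; }
        public Builder setPort(int port) { this.port = port; return this; }
        public Builder setFindPort(boolean f) { this.findPort = f; return this; }

        public SimpleHttpServer build() { return new SimpleHttpServer(this); }
    }
}
```

Adding a new option then means adding one setter with a default, rather than a new constructor overload for every combination.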
[jira] [Created] (HADOOP-9756) Additional cleanup of RPC code
Junping Du created HADOOP-9756: -- Summary: Additional cleanup of RPC code Key: HADOOP-9756 URL: https://issues.apache.org/jira/browse/HADOOP-9756 Project: Hadoop Common Issue Type: Improvement Components: ipc Reporter: Junping Du Assignee: Junping Du Priority: Minor HADOOP-9754 already did a good job addressing most of the RPC code cleanup. Here is some additional work, including: - Remove some unused deprecated code. - Narrow throws Exception to specific exceptions and remove some unnecessary ones. - Fix a generics warning and correct a spelling issue. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9616) In branch-2, the baseline of Javadoc warnings (specified in test-patch.properties) does not match the Javadoc warnings in the current codebase
Junping Du created HADOOP-9616: -- Summary: In branch-2, the baseline of Javadoc warnings (specified in test-patch.properties) does not match the Javadoc warnings in the current codebase Key: HADOOP-9616 URL: https://issues.apache.org/jira/browse/HADOOP-9616 Project: Hadoop Common Issue Type: Bug Reporter: Junping Du Assignee: Junping Du The baseline is currently set to 13 warnings, but there are now 29. Of these, 16 come from using Sun proprietary APIs and 13 come from incorrect links in docs. I think we should at least fix the 13 link warnings and set the baseline to 16. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Reopened] (HADOOP-8470) Implementation of 4-layer subclass of NetworkTopology (NetworkTopologyWithNodeGroup)
[ https://issues.apache.org/jira/browse/HADOOP-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du reopened HADOOP-8470: Reopening this JIRA for backport to branch-2. Implementation of 4-layer subclass of NetworkTopology (NetworkTopologyWithNodeGroup) Key: HADOOP-8470 URL: https://issues.apache.org/jira/browse/HADOOP-8470 Project: Hadoop Common Issue Type: Sub-task Affects Versions: 1.0.0, 2.0.0-alpha Reporter: Junping Du Assignee: Junping Du Fix For: 3.0.0 Attachments: HADOOP-8470-NetworkTopology-new-impl.patch, HADOOP-8470-NetworkTopology-new-impl-v2.patch, HADOOP-8470-NetworkTopology-new-impl-v3.patch, HADOOP-8470-NetworkTopology-new-impl-v4.patch To support the four-layer hierarchical topology shown in the attached figure as a subclass of NetworkTopology, NetworkTopologyWithNodeGroup was developed along with unit tests. Overriding the methods add, remove, and pseudoSortByDistance in NetworkTopologyWithNodeGroup was the most relevant change to support the four-layer topology. The method pseudoSortByDistance selects the nodes to use for reading data and sorts them in the sequence node-local, nodegroup-local, rack-local, off-rack. Another slight change to pseudoSortByDistance supports cases where the data node and node manager are separated: if the reader cannot be found in the NetworkTopology tree (formed by data nodes only), it will try to sort according to the reader's sibling node in the tree. The distance calculation changes the weights from 0 (local), 2 (rack-local), 4 (off-rack) to 0 (local), 2 (nodegroup-local), 4 (rack-local), 6 (off-rack). The additional node group layer should be specified in the topology script or table mapping, e.g. input 10.1.1.1, output /rack1/nodegroup1. A subclass of InnerNode, InnerNodeWithNodeGroup, was also needed to support NetworkTopologyWithNodeGroup. -- This message is automatically generated by JIRA. 
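The revised weights can be illustrated with a simplified distance function over location strings of the form /rack/nodegroup/host. This is a sketch of the weighting scheme described above, not the actual NetworkTopologyWithNodeGroup implementation.

```java
// Simplified stand-in for the four-layer distance calculation described
// above (0 local, 2 nodegroup-local, 4 rack-local, 6 off-rack); not the
// actual NetworkTopologyWithNodeGroup code.
public class NodeGroupDistance {
    /** Location strings look like "/rack1/nodegroup1/host1". */
    public static int distance(String a, String b) {
        String[] pa = a.substring(1).split("/");
        String[] pb = b.substring(1).split("/");
        if (pa[0].equals(pb[0])) {                  // same rack
            if (pa[1].equals(pb[1])) {              // same nodegroup
                return pa[2].equals(pb[2]) ? 0 : 2; // same host => local
            }
            return 4;                               // rack-local
        }
        return 6;                                   // off-rack
    }
}
```

Sorting reader candidates by this value yields exactly the node-local, nodegroup-local, rack-local, off-rack preference order the issue describes.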
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Reopened] (HADOOP-8469) Make NetworkTopology class pluggable
[ https://issues.apache.org/jira/browse/HADOOP-8469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du reopened HADOOP-8469: Backport this patch to branch-2 Make NetworkTopology class pluggable Key: HADOOP-8469 URL: https://issues.apache.org/jira/browse/HADOOP-8469 Project: Hadoop Common Issue Type: Sub-task Affects Versions: 1.0.0, 2.0.0-alpha Reporter: Junping Du Assignee: Junping Du Fix For: 3.0.0 Attachments: HADOOP-8469-NetworkTopology-pluggable.patch, HADOOP-8469-NetworkTopology-pluggable-v2.patch, HADOOP-8469-NetworkTopology-pluggable-v3.patch, HADOOP-8469-NetworkTopology-pluggable-v4.patch, HADOOP-8469-NetworkTopology-pluggable-v5.patch The class NetworkTopology is where the three-layer hierarchical topology is modeled in the current code base; it is instantiated directly by the DatanodeManager and Balancer. To support alternative topologies, changes were made to make the topology class pluggable, that is, to support a user-specified topology class configured in the Hadoop configuration file core-default.xml. The user-specified topology class is instantiated using reflection, in the same manner as other customizable classes in Hadoop. If no user-specified topology class is found, the fallback is NetworkTopology, preserving current behavior. To make it possible to reuse code in NetworkTopology, several minor changes were made to make the class more extensible. The NetworkTopology class is currently annotated with @InterfaceAudience.LimitedPrivate({HDFS, MapReduce}) and @InterfaceStability.Unstable. The proposed changes in NetworkTopology are listed below: 1. Some fields were changed from private to protected 2. Added some protected methods so that subclasses can override behavior 3. Added a new method, isNodeGroupAware, to NetworkTopology 4. The inner class InnerNode was made a package-protected class so it would be easier to subclass -- This message is automatically generated by JIRA. 
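The reflection-with-fallback pattern described above can be sketched as follows. The configuration key name "net.topology.impl" and the Topology interface are assumptions for illustration, not the actual Hadoop configuration key or class hierarchy.

```java
// Sketch of loading a user-specified class via reflection with a default
// fallback; "net.topology.impl", Topology, and DefaultTopology are all
// illustrative names, not the actual Hadoop API.
import java.util.Properties;

public class TopologyFactory {
    public interface Topology { String name(); }

    public static class DefaultTopology implements Topology {
        public String name() { return "three-layer"; }
    }

    public static Topology newTopology(Properties conf) {
        String className = conf.getProperty("net.topology.impl");
        if (className == null) {
            return new DefaultTopology(); // preserve current behavior
        }
        try {
            return (Topology) Class.forName(className)
                    .getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("Cannot load topology class " + className, e);
        }
    }
}
```

This mirrors how Hadoop instantiates other customizable classes: the configured class name is resolved at startup, and absence of the key keeps the stock implementation.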
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HADOOP-9165) Dynamic Resource/Slot Configuration on NM/TT
[ https://issues.apache.org/jira/browse/HADOOP-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du resolved HADOOP-9165. Resolution: Invalid This issue will be tracked in separate JIRAs for YARN and MRV1. Dynamic Resource/Slot Configuration on NM/TT Key: HADOOP-9165 URL: https://issues.apache.org/jira/browse/HADOOP-9165 Project: Hadoop Common Issue Type: New Feature Reporter: Junping Du Attachments: Elastic Resources for Hadoop-draft v0.1.pdf The current Hadoop MRV1/YARN resource management logic assumes per-node (TT/NM) resources are static during the lifetime of the TT/NM process. Allowing run-time configuration of per-node resources gives us finer-grained resource elasticity. This allows Hadoop workloads to coexist efficiently with other workloads on the same hardware, whether or not the environment is virtualized. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9045) In the nodegroup-aware case, avoid placing a replica on nodes whose nodegroup already holds a replica
Junping Du created HADOOP-9045: -- Summary: In the nodegroup-aware case, avoid placing a replica on nodes whose nodegroup already holds a replica Key: HADOOP-9045 URL: https://issues.apache.org/jira/browse/HADOOP-9045 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.0.2-alpha Reporter: Junping Du In the previous implementation for HADOOP-8468, the 3rd replica avoids being placed in the same nodegroup as the 2nd replica. But there was no check against the nodegroup of the 1st replica, so if the 2nd replica's rack has no suitable node, it is possible to place the 3rd and 1st replicas within the same nodegroup. We need a change that removes all nodes of a nodegroup from the available candidates whenever a replica is already placed in that nodegroup. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
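The proposed exclusion rule can be sketched like this, using simple /rack/nodegroup/host location strings in place of real DatanodeDescriptor objects (a simplified illustration, not the actual BlockPlacementPolicy code):

```java
// Sketch of the exclusion rule: once any chosen replica lives in a
// nodegroup, every candidate node in that nodegroup is filtered out.
// Locations are "/rack/nodegroup/host" strings; simplified illustration.
import java.util.ArrayList;
import java.util.List;

public class NodeGroupExclusion {
    /** "/r1/ng1/h2" -> "/r1/ng1" (drop the host component). */
    static String nodeGroupOf(String location) {
        return location.substring(0, location.lastIndexOf('/'));
    }

    public static List<String> excludeSameNodeGroup(List<String> candidates,
                                                    List<String> chosen) {
        List<String> result = new ArrayList<>();
        for (String c : candidates) {
            boolean conflicts = false;
            for (String picked : chosen) {
                if (nodeGroupOf(c).equals(nodeGroupOf(picked))) {
                    conflicts = true;
                    break;
                }
            }
            if (!conflicts) {
                result.add(c);
            }
        }
        return result;
    }
}
```

Applying this filter against all already-chosen replicas (not just the 2nd) closes the gap the issue describes, where the 1st and 3rd replicas could land in the same nodegroup.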
[jira] [Created] (HADOOP-9047) TestHDFSFileSystemContract.testMkdirsWithUmask failed by using umask 022
Junping Du created HADOOP-9047: -- Summary: TestHDFSFileSystemContract.testMkdirsWithUmask failed by using umask 022 Key: HADOOP-9047 URL: https://issues.apache.org/jira/browse/HADOOP-9047 Project: Hadoop Common Issue Type: Bug Components: test Affects Versions: 2.0.2-alpha Reporter: Junping Du In the PreCommit test of HADOOP-9405, this error appears because the test still uses the system's default 022 umask instead of the 062 umask specified in the test. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9048) TestHDFSFileSystemContract.testMkdirsWithUmask failed by using umask 022
Junping Du created HADOOP-9048: -- Summary: TestHDFSFileSystemContract.testMkdirsWithUmask failed by using umask 022 Key: HADOOP-9048 URL: https://issues.apache.org/jira/browse/HADOOP-9048 Project: Hadoop Common Issue Type: Bug Components: test Affects Versions: 2.0.2-alpha Reporter: Junping Du In the PreCommit test of HADOOP-9405, this error appears because the test still uses the system's default 022 umask instead of the 062 umask specified in the test. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HADOOP-9048) TestHDFSFileSystemContract.testMkdirsWithUmask failed by using umask 022
[ https://issues.apache.org/jira/browse/HADOOP-9048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du resolved HADOOP-9048. Resolution: Duplicate Duplicate of HADOOP-9047; the JIRA was created twice due to a double click on a flaky network. TestHDFSFileSystemContract.testMkdirsWithUmask failed by using umask 022 Key: HADOOP-9048 URL: https://issues.apache.org/jira/browse/HADOOP-9048 Project: Hadoop Common Issue Type: Bug Components: test Affects Versions: 2.0.2-alpha Reporter: Junping Du In the PreCommit test of HADOOP-9405, this error appears because the test still uses the system's default 022 umask instead of the 062 umask specified in the test. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8817) Backport Network Topology Extension for Virtualization (HADOOP-8468) to branch-1
Junping Du created HADOOP-8817: -- Summary: Backport Network Topology Extension for Virtualization (HADOOP-8468) to branch-1 Key: HADOOP-8817 URL: https://issues.apache.org/jira/browse/HADOOP-8817 Project: Hadoop Common Issue Type: Sub-task Affects Versions: 1.0.0 Reporter: Junping Du Assignee: Junping Du -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8820) Backport HADOOP-8469 and HADOOP-8470: add NodeGroup layer in new NetworkTopology (also known as NetworkTopologyWithNodeGroup)
Junping Du created HADOOP-8820: -- Summary: Backport HADOOP-8469 and HADOOP-8470: add NodeGroup layer in new NetworkTopology (also known as NetworkTopologyWithNodeGroup) Key: HADOOP-8820 URL: https://issues.apache.org/jira/browse/HADOOP-8820 Project: Hadoop Common Issue Type: New Feature Components: net Affects Versions: 1.0.0 Reporter: Junping Du -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8567) Backport conf servlet with dump running configuration to branch 1.x
Junping Du created HADOOP-8567: -- Summary: Backport conf servlet that dumps the running configuration to branch 1.x Key: HADOOP-8567 URL: https://issues.apache.org/jira/browse/HADOOP-8567 Project: Hadoop Common Issue Type: New Feature Components: conf Affects Versions: 1.0.3 Reporter: Junping Du Assignee: Junping Du Fix For: 0.21.1, 2.0.1-alpha HADOOP-6408 provided a conf servlet that can dump the running configuration, which greatly helps admins troubleshoot configuration issues. However, that patch works only on branches after 0.21 and should be backported to branch 1.x. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8557) Core tests failed in Jenkins for patch pre-commit
Junping Du created HADOOP-8557: -- Summary: Core tests failed in Jenkins for patch pre-commit Key: HADOOP-8557 URL: https://issues.apache.org/jira/browse/HADOOP-8557 Project: Hadoop Common Issue Type: Bug Components: test Reporter: Junping Du Priority: Blocker In the Jenkins PreCommit build history (https://builds.apache.org/job/PreCommit-HADOOP-Build/), the following tests have failed for all recent patches (builds 1164, 1166, 1168, 1170): org.apache.hadoop.ha.TestZKFailoverController.testGracefulFailover org.apache.hadoop.ha.TestZKFailoverController.testOneOfEverything org.apache.hadoop.io.file.tfile.TestTFileByteArrays.testOneBlock org.apache.hadoop.io.file.tfile.TestTFileByteArrays.testOneBlockPlusOneEntry org.apache.hadoop.io.file.tfile.TestTFileByteArrays.testThreeBlocks org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays.testOneBlock org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays.testOneBlockPlusOneEntry org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays.testThreeBlocks -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8542) TestViewFsTrash failed several times on Precommit test
Junping Du created HADOOP-8542: -- Summary: TestViewFsTrash failed several times on Precommit test Key: HADOOP-8542 URL: https://issues.apache.org/jira/browse/HADOOP-8542 Project: Hadoop Common Issue Type: Bug Components: fs, test Reporter: Junping Du I have met this error several times before (with different patches); the latest is in HADOOP-8472, where it is unrelated to the patch. The error comes and goes, and I cannot reproduce it in my local dev environment. The error log from the precommit test is below: junit.framework.AssertionFailedError: -expunge failed expected:<0> but was:<1> at junit.framework.Assert.fail(Assert.java:47) at junit.framework.Assert.failNotEquals(Assert.java:283) at junit.framework.Assert.assertEquals(Assert.java:64) at junit.framework.Assert.assertEquals(Assert.java:195) at org.apache.hadoop.fs.TestTrash.trashShell(TestTrash.java:322) at org.apache.hadoop.fs.viewfs.TestViewFsTrash.testTrash(TestViewFsTrash.java:73) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49) at 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189) at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165) at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:103) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8526) Fix 13 Javadoc Warning in hadoop-hdfs-raid project
Junping Du created HADOOP-8526: -- Summary: Fix 13 Javadoc Warning in hadoop-hdfs-raid project Key: HADOOP-8526 URL: https://issues.apache.org/jira/browse/HADOOP-8526 Project: Hadoop Common Issue Type: Bug Components: documentation Affects Versions: 2.0.0-alpha Reporter: Junping Du Assignee: Junping Du Priority: Minor -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8475) 4-layer topology (with NodeGroup layer) implementation of Container Assignment and Task Scheduling (for YARN)
Junping Du created HADOOP-8475: -- Summary: 4-layer topology (with NodeGroup layer) implementation of Container Assignment and Task Scheduling (for YARN) Key: HADOOP-8475 URL: https://issues.apache.org/jira/browse/HADOOP-8475 Project: Hadoop Common Issue Type: Sub-task Affects Versions: 2.0.0-alpha, 1.0.0 Reporter: Junping Du Assignee: Junping Du Several classes in YARN's container assignment and task scheduling algorithms relate to data locality and were updated to give preference to running a container on the same nodegroup. This section summarizes the changes in the patch that provides a new implementation to support a four-layer hierarchy. When the ApplicationMaster makes a resource allocation request to the scheduler of the ResourceManager, it adds the node group to the list of attributes in the ResourceRequest. The parameters of the resource request change from priority, (host, rack, *), memory, #containers to priority, (host, nodegroup, rack, *), memory, #containers. After receiving the ResourceRequest, the RM scheduler assigns containers for requests in the sequence data-local, nodegroup-local, rack-local, and off-switch. Then, the ApplicationMaster schedules tasks on allocated containers in the sequence data-local, nodegroup-local, rack-local, and off-switch. In terms of code changes made to YARN task scheduling, we updated the class ContainerRequestEvent so that container requests from applications can include a nodegroup. In the RM schedulers, FifoScheduler and CapacityScheduler were updated. For the FifoScheduler, the changes were in the method assignContainers. For the CapacityScheduler, the method assignContainersOnNode in the LeafQueue class was updated. In both cases a new method, assignNodeGroupLocalContainers(), was added between the data-local and rack-local assignment steps. -- This message is automatically generated by JIRA. 
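The assignment ordering described above can be sketched as a simple loop over locality levels. This is illustrative only; the method name and the map of pending counts are assumptions, not the actual FifoScheduler or CapacityScheduler code.

```java
// Illustrative sketch of the locality preference order (data-local,
// nodegroup-local, rack-local, off-switch); assignOne and the pending-count
// map are assumptions, not the actual YARN scheduler API.
import java.util.Map;

public class LocalityOrderedAssigner {
    /** Decrement and return the first locality level with pending containers. */
    public static String assignOne(Map<String, Integer> pending) {
        String[] order = {"data-local", "nodegroup-local", "rack-local", "off-switch"};
        for (String level : order) {
            Integer count = pending.get(level);
            if (count != null && count > 0) {
                pending.put(level, count - 1);
                return level;
            }
        }
        return null; // nothing left to assign at this priority
    }
}
```

The nodegroup-local step sitting between data-local and rack-local is exactly where the new assignNodeGroupLocalContainers() hook was inserted in both schedulers.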
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8468) Umbrella of enhancements to support different failure and locality topologies
Junping Du created HADOOP-8468: -- Summary: Umbrella of enhancements to support different failure and locality topologies Key: HADOOP-8468 URL: https://issues.apache.org/jira/browse/HADOOP-8468 Project: Hadoop Common Issue Type: Bug Components: ha, io Affects Versions: 2.0.0-alpha, 1.0.0 Reporter: Junping Du Assignee: Junping Du Priority: Critical The current hadoop network topology (described in some previous issues like HADOOP-692) worked well for the classic three-tier network when it was introduced. However, it does not take into account other failure models, or changes in the infrastructure that can affect network bandwidth efficiency, such as virtualization. Virtualized platforms have the following characteristics that shouldn't be ignored by hadoop topology when scheduling tasks, placing replicas, balancing, or fetching blocks for reading: 1. VMs on the same physical host are affected by the same hardware failure. In order to match the reliability of a physical deployment, replication of data across two virtual machines on the same host should be avoided. 2. The network between VMs on the same physical host has higher throughput and lower latency and does not consume any physical switch bandwidth. Thus, we propose to make the hadoop network topology extensible and introduce a new level in the hierarchical topology, a node group level, which maps well onto an infrastructure that is based on a virtualized environment. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8469) Make NetworkTopology class pluggable and support user specified topology class
Junping Du created HADOOP-8469: -- Summary: Make NetworkTopology class pluggable and support user specified topology class Key: HADOOP-8469 URL: https://issues.apache.org/jira/browse/HADOOP-8469 Project: Hadoop Common Issue Type: Sub-task Affects Versions: 2.0.0-alpha, 1.0.0 Reporter: Junping Du Assignee: Junping Du The class NetworkTopology is where the three-layer hierarchical topology is modeled in the current code base; it is instantiated directly by the DatanodeManager and Balancer. To support alternative topologies, changes were made to make the topology class pluggable, that is, to support a user-specified topology class configured in the Hadoop configuration file core-default.xml. The user-specified topology class is instantiated using reflection, in the same manner as other customizable classes in Hadoop. If no user-specified topology class is found, the fallback is NetworkTopology, preserving current behavior. To make it possible to reuse code in NetworkTopology, several minor changes were made to make the class more extensible. The NetworkTopology class is currently annotated with @InterfaceAudience.LimitedPrivate({HDFS, MapReduce}) and @InterfaceStability.Unstable. The proposed changes in NetworkTopology are listed below: 1. Some fields were changed from private to protected 2. Added some protected methods so that subclasses can override behavior 3. Added a new method, isNodeGroupAware, to NetworkTopology 4. The inner class InnerNode was made a package-protected class so it would be easier to subclass -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-8470) Implementation of 4-layer subclass of NetworkTopology (NetworkTopologyWithNodeGroup)
Junping Du created HADOOP-8470:
--
Summary: Implementation of 4-layer subclass of NetworkTopology (NetworkTopologyWithNodeGroup)
Key: HADOOP-8470
URL: https://issues.apache.org/jira/browse/HADOOP-8470
Project: Hadoop Common
Issue Type: Sub-task
Affects Versions: 2.0.0-alpha, 1.0.0
Reporter: Junping Du
Assignee: Junping Du

To support the four-layer hierarchical topology shown in the attached figure, NetworkTopologyWithNodeGroup was developed as a subclass of NetworkTopology, along with unit tests. The overridden methods most relevant to the four-layer topology are add, remove, and pseudoSortByDistance. The method pseudoSortByDistance selects the nodes to use for reading data and sorts them in the order node-local, nodegroup-local, rack-local, off-rack. Another slight change to pseudoSortByDistance supports deployments where data nodes and node managers are separated: if the reader cannot be found in the NetworkTopology tree (which is formed by data nodes only), sorting falls back to the reader's sibling node in the tree.

The distance calculation changes the weights from 0 (local), 2 (rack-local), 4 (off-rack) to 0 (local), 2 (nodegroup-local), 4 (rack-local), 6 (off-rack). The additional node group layer should be specified in the topology script or table mapping, e.g. input 10.1.1.1, output: /rack1/nodegroup1. A subclass of InnerNode, InnerNodeWithNodeGroup, was also needed to support NetworkTopologyWithNodeGroup.
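The four-level weighting above can be sketched as a small standalone function. The Node class here is a simplified stand-in for Hadoop's Node interface, and the network locations follow the /rack1/nodegroup1 mapping example; only the weight values (0/2/4/6) come from the issue description.

```java
public class DistanceWeight {
    static class Node {
        final String host, nodeGroup, rack;
        Node(String host, String nodeGroup, String rack) {
            this.host = host; this.nodeGroup = nodeGroup; this.rack = rack;
        }
    }

    // Weight used for sorting read candidates: lower is closer.
    static int getWeight(Node reader, Node node) {
        if (reader == null) return 6;                  // no locality information
        if (reader.host.equals(node.host)) return 0;   // node-local
        if (!reader.rack.equals(node.rack)) return 6;  // off-rack
        return reader.nodeGroup.equals(node.nodeGroup) ? 2   // nodegroup-local
                                                       : 4;  // rack-local
    }

    public static void main(String[] args) {
        Node reader = new Node("10.1.1.1", "/rack1/nodegroup1", "/rack1");
        System.out.println(getWeight(reader, new Node("10.1.1.1", "/rack1/nodegroup1", "/rack1"))); // 0
        System.out.println(getWeight(reader, new Node("10.1.1.2", "/rack1/nodegroup1", "/rack1"))); // 2
        System.out.println(getWeight(reader, new Node("10.1.2.1", "/rack1/nodegroup2", "/rack1"))); // 4
        System.out.println(getWeight(reader, new Node("10.2.1.1", "/rack2/nodegroup1", "/rack2"))); // 6
    }
}
```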
[jira] [Created] (HADOOP-8471) Make ReplicaPlacementPolicyDefault extensible for reuse code in subclass
Junping Du created HADOOP-8471:
--
Summary: Make ReplicaPlacementPolicyDefault extensible for reuse code in subclass
Key: HADOOP-8471
URL: https://issues.apache.org/jira/browse/HADOOP-8471
Project: Hadoop Common
Issue Type: Sub-task
Components: ha, io
Affects Versions: 2.0.0-alpha, 1.0.0
Reporter: Junping Du
Assignee: Junping Du

ReplicaPlacementPolicy is already a pluggable component in Hadoop: a user-specified ReplicaPlacementPolicy can be configured in hdfs-site.xml under the key dfs.block.replicator.classname. However, to make it possible to reuse code in ReplicaPlacementPolicyDefault, a few of its methods were changed from private to protected. ReplicaPlacementPolicy and BlockPlacementPolicyDefault are currently annotated with @InterfaceAudience.Private.
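Why the private-to-protected change matters can be shown with a toy template-method sketch: a protected hook lets a subclass add a node-group constraint without copying the default placement loop. All class and method names below are simplified stand-ins, not the real BlockPlacementPolicyDefault API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class DefaultPolicy {
    // protected (rather than private) so subclasses can refine the check
    protected boolean isGoodTarget(String nodeGroup, List<String> chosen) {
        return true;  // default policy: no node-group constraint
    }

    final List<String> chooseTargets(List<String> candidates, int replicas) {
        List<String> chosen = new ArrayList<>();
        for (String g : candidates) {
            if (chosen.size() == replicas) break;
            if (isGoodTarget(g, chosen)) chosen.add(g);
        }
        return chosen;
    }
}

class NodeGroupPolicy extends DefaultPolicy {
    @Override
    protected boolean isGoodTarget(String nodeGroup, List<String> chosen) {
        return !chosen.contains(nodeGroup);  // at most one replica per node group
    }
}

public class PlacementDemo {
    public static void main(String[] args) {
        List<String> groups = Arrays.asList("/r1/ng1", "/r1/ng1", "/r1/ng2", "/r2/ng1");
        System.out.println(new DefaultPolicy().chooseTargets(groups, 3));
        // two candidates share /r1/ng1; the default policy accepts both
        System.out.println(new NodeGroupPolicy().chooseTargets(groups, 3));
        // the subclass skips the duplicate node group
    }
}
```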
[jira] [Created] (HADOOP-8473) Update Balancer to support new NetworkTopology with NodeGroup
Junping Du created HADOOP-8473:
--
Summary: Update Balancer to support new NetworkTopology with NodeGroup
Key: HADOOP-8473
URL: https://issues.apache.org/jira/browse/HADOOP-8473
Project: Hadoop Common
Issue Type: Sub-task
Components: util
Affects Versions: 2.0.0-alpha, 1.0.0
Reporter: Junping Du
Assignee: Junping Du

Since the Balancer is a Hadoop Tool, it was updated to be directly aware of the four-layer hierarchy instead of creating an alternative Balancer implementation. To accommodate extensibility, a new protected method, doChooseNodesForCustomFaultDomain, is now called from the existing chooseNodes method so that a subclass of the Balancer can customize the balancing algorithm for other failure and locality topologies. An alternative option is to encapsulate the algorithm used for the four-layer hierarchy in a collaborating strategy class.

The key change introduced to support the four-layer hierarchy was to override the algorithm for choosing source/target pairs for balancing; unit tests were created for the new algorithm. The algorithm now makes sure to choose target and source nodes in the same node group as the first priority. The overall balancing policy is: first balance between nodes within the same node group, then within the same rack, and finally off-rack. Also, we need to check that no duplicated replicas live in the same node group after balancing.
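The pairing priority described above can be sketched as three matching passes: nodegroup-local pairs first, then rack-local, then any remaining pair. The types below are simplified stand-ins for the Balancer's internal source/target bookkeeping, not its real API.

```java
import java.util.ArrayList;
import java.util.List;

public class BalancerPairing {
    static class Node {
        final String name, nodeGroup, rack;
        Node(String name, String nodeGroup, String rack) {
            this.name = name; this.nodeGroup = nodeGroup; this.rack = rack;
        }
    }

    // Matches over-utilized sources with under-utilized targets,
    // preferring nodegroup-local, then rack-local, then off-rack moves.
    static List<String> pair(List<Node> sources, List<Node> targets) {
        List<String> pairs = new ArrayList<>();
        List<Node> src = new ArrayList<>(sources);
        List<Node> tgt = new ArrayList<>(targets);
        for (int pass = 0; pass < 3; pass++) {
            for (Node s : new ArrayList<>(src)) {
                for (Node t : new ArrayList<>(tgt)) {
                    boolean match =
                        (pass == 0 && s.nodeGroup.equals(t.nodeGroup)) ||  // nodegroup-local
                        (pass == 1 && s.rack.equals(t.rack)) ||            // rack-local
                        (pass == 2);                                       // off-rack
                    if (match) {
                        pairs.add(s.name + "->" + t.name);
                        src.remove(s);
                        tgt.remove(t);
                        break;
                    }
                }
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        List<Node> sources = new ArrayList<>();
        sources.add(new Node("s1", "/r1/ng1", "/r1"));
        sources.add(new Node("s2", "/r2/ng1", "/r2"));
        List<Node> targets = new ArrayList<>();
        targets.add(new Node("t1", "/r1/ng2", "/r1"));
        targets.add(new Node("t2", "/r2/ng1", "/r2"));
        // s2 pairs nodegroup-local with t2 in pass 0; s1 then pairs
        // rack-local with t1 in pass 1.
        System.out.println(pair(sources, targets));
    }
}
```

The real Balancer must additionally verify, as noted above, that no duplicated replicas end up in the same node group after a move; that check is omitted here.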
[jira] [Created] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character
Junping Du created HADOOP-8372:
--
Summary: normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character
Key: HADOOP-8372
URL: https://issues.apache.org/jira/browse/HADOOP-8372
Project: Hadoop Common
Issue Type: Bug
Components: io, util
Affects Versions: 0.23.0, 1.0.0
Reporter: Junping Du
Assignee: Junping Du

A valid host name can start with a numeric character (see RFC 952, RFC 1123, or http://www.zytrax.com/books/dns/apa/names.html), so in a production environment users may name their Hadoop nodes 1hosta, 2hostb, etc. But normalizeHostName() recognizes such a hostname as an IP address and returns it directly rather than resolving the real IP address. These nodes will fail to get the correct network topology if the topology script/TableMapping contains only their IPs (without hostnames).
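The misclassification can be demonstrated with a toy version of the check: a heuristic that treats any name starting with a digit as an IP address mishandles valid hostnames like 1hosta, while requiring a full dotted quad does not. This is an illustrative sketch, not the actual NetUtils code; the real fix would then resolve the hostname (e.g. via InetAddress.getByName) to obtain its IP.

```java
import java.util.regex.Pattern;

public class HostCheck {
    private static final Pattern DOTTED_QUAD =
            Pattern.compile("\\d{1,3}(\\.\\d{1,3}){3}");

    // Buggy heuristic: "1hosta" is wrongly taken for an IP address
    // and returned unresolved.
    static boolean looksLikeIpBuggy(String name) {
        return !name.isEmpty() && Character.isDigit(name.charAt(0));
    }

    // Stricter check: only a full dotted quad is treated as an IP.
    static boolean looksLikeIp(String name) {
        return DOTTED_QUAD.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(looksLikeIpBuggy("1hosta"));  // true  (wrong: it is a hostname)
        System.out.println(looksLikeIp("1hosta"));       // false (correct: resolve it)
        System.out.println(looksLikeIp("10.1.1.1"));     // true
    }
}
```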
[jira] [Created] (HADOOP-8304) DNSToSwitchMapping should add interface to resolve individual host besides a list of host
Junping Du created HADOOP-8304:
--
Summary: DNSToSwitchMapping should add interface to resolve individual host besides a list of host
Key: HADOOP-8304
URL: https://issues.apache.org/jira/browse/HADOOP-8304
Project: Hadoop Common
Issue Type: Improvement
Components: io
Affects Versions: 1.0.0, 2.0.0
Reporter: Junping Du
Assignee: Junping Du
Fix For: 2.0.0

DNSToSwitchMapping currently has only one API to resolve a host list: public List<String> resolve(List<String> names). But the two major callers, RackResolver.resolve() and DatanodeManager.resolveNetworkLocation(), take a single host name and have to wrap it in a single-entry ArrayList. This is unnecessary, especially when the host has been cached before.
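One way the proposed single-host API could look is sketched below: a default method reuses the list-based resolve so existing implementations gain the one-name variant without wrapping at every call site, and a caching implementation could override it directly. The interface mirrors the name above, but the default-method shape and the StaticMapping class are assumptions for illustration.

```java
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;

interface DNSToSwitchMapping {
    List<String> resolve(List<String> names);

    // Proposed addition: resolve a single host, delegating to the
    // list-based method by default.
    default String resolve(String name) {
        return resolve(Collections.singletonList(name)).get(0);
    }
}

public class StaticMapping implements DNSToSwitchMapping {
    @Override
    public List<String> resolve(List<String> names) {
        // Toy mapping: every host lands on /default-rack.
        return names.stream().map(n -> "/default-rack").collect(Collectors.toList());
    }

    public static void main(String[] args) {
        DNSToSwitchMapping m = new StaticMapping();
        // No ArrayList wrapping needed at the call site.
        System.out.println(m.resolve("datanode1"));  // /default-rack
    }
}
```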