[jira] [Created] (HADOOP-10439) Fix compilation error in branch-2 after HADOOP-10426
Haohui Mai created HADOOP-10439: --- Summary: Fix compilation error in branch-2 after HADOOP-10426 Key: HADOOP-10439 URL: https://issues.apache.org/jira/browse/HADOOP-10439 Project: Hadoop Common Issue Type: Sub-task Reporter: Haohui Mai Assignee: Haohui Mai HADOOP-10426 removes the import of {{java.io.File}} in branch-2, which causes a compilation error. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HADOOP-10439) Fix compilation error in branch-2 after HADOOP-10426
[ https://issues.apache.org/jira/browse/HADOOP-10439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haohui Mai resolved HADOOP-10439. - Resolution: Fixed Fix Version/s: 2.5.0 I've committed it to branch-2. Thanks [~szetszwo] for the review. Fix compilation error in branch-2 after HADOOP-10426 Key: HADOOP-10439 URL: https://issues.apache.org/jira/browse/HADOOP-10439 Project: Hadoop Common Issue Type: Sub-task Components: build Affects Versions: 2.5.0 Reporter: Haohui Mai Assignee: Haohui Mai Fix For: 2.5.0 Attachments: HADOOP-10439.000.patch HADOOP-10426 removes the import of {{java.io.File}} in branch-2, which causes a compilation error. -- This message was sent by Atlassian JIRA (v6.2#6252)
Build failed in Jenkins: Hadoop-Common-trunk #1081
See https://builds.apache.org/job/Hadoop-Common-trunk/1081/changes Changes: [wheat9] HDFS-6130. NPE when upgrading namenode from fsimages older than -32. Contributed by Haohui Mai. [arp] HDFS-5910. Enhance DataTransferProtocol to allow per-connection choice of encryption/plain-text. (Contributed by Benoy Antony) [jianhe] YARN-1521. Mark Idempotent/AtMostOnce annotations to the APIs in ApplicationClientProtcol, ResourceManagerAdministrationProtocol and ResourceTrackerProtocol so that they work in HA scenario. Contributed by Xuan Gong [vinodkv] YARN-1867. Fixed a bug in ResourceManager that was causing invalid ACL checks in the web-services after fail-over. Contributed by Vinod Kumar Vavilapalli. [vinodkv] YARN-1452. Added documentation about the configuration and usage of generic application history and the timeline data service. Contributed by Zhijie Shen. [arp] HADOOP-10280. Add files missed in previous checkin. [arp] HADOOP-10280. Make Schedulables return a configurable identity of user or group. (Contributed by Chris Li) [vinodkv] YARN-1866. Fixed an issue with renewal of RM-delegation tokens on restart or fail-over. Contributed by Jian He. [szetszwo] HADOOP-10426. Declare CreateOpts.getOpt(..) with generic type argument, removes unused FileContext.getFileStatus(..) and fixes various javac warnings. [wheat9] HDFS-5196. Provide more snapshot information in WebUI. Contributed by Shinichi Yamashita. -- [...truncated 62611 lines...] 
Re: Passive mode for FTPFileSystem
On 25 March 2014 13:34, Michael Howard <mich...@newvantagesolutions.com> wrote:

>> https://issues.apache.org/jira/browse/HADOOP-8602
>> Case test needs to use toLower() with a locale; the equalsIgnoreCase() fails in countries where "I".toLower() != "i"
>
> I am somewhat puzzled how locale and case sensitivity relate to FTPFileSystem, but I will take a look and do my best to figure it out.

There's a case-sensitive check for the passive configuration option that would fail in Turkey.
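The Turkish-locale hazard described above can be reproduced directly: under a Turkish locale, toLowerCase() maps 'I' to dotless 'ı' (U+0131), so a default-locale lowercased comparison against a keyword like "passive" fails. A minimal sketch (the method names below are illustrative, not FTPFileSystem's actual code):

```java
import java.util.Locale;

class CaseCheckDemo {
    // Locale-sensitive comparison: breaks under tr_TR because 'I' -> 'ı'.
    static boolean matchesLossy(String value, Locale locale) {
        return value.toLowerCase(locale).equals("passive");
    }

    // Locale-safe comparison: pin a fixed locale for config keywords.
    static boolean matchesSafe(String value) {
        return value.toLowerCase(Locale.ENGLISH).equals("passive");
    }
}
```

Pinning Locale.ENGLISH (or Locale.ROOT) for configuration keywords keeps the comparison independent of the machine's default locale.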
[jira] [Created] (HADOOP-10440) HarFsInputStream of HarFileSystem, when reading data, computing the position has bug
guodongdong created HADOOP-10440: Summary: HarFsInputStream of HarFileSystem, when reading data, computing the position has bug Key: HADOOP-10440 URL: https://issues.apache.org/jira/browse/HADOOP-10440 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.3.0 Environment: Reporter: guodongdong

In the HarFsInputStream of HarFileSystem, when data is read through the interface {{int read(byte[] b)}}, {{int read(byte[] b, int offset, int len)}} will be called and the position will be updated there, so the position does not need to be updated again in {{int read(byte[] b)}}:

{code}
public synchronized int read(byte[] b) throws IOException {
  int ret = read(b, 0, b.length);
  if (ret != -1) {
    position += ret;
  }
  return ret;
}

public synchronized int read(byte[] b, int offset, int len) throws IOException {
  int newlen = len;
  int ret = -1;
  if (position + len > end) {
    newlen = (int) (end - position);
  }
  // end case
  if (newlen == 0)
    return ret;
  ret = underLyingStream.read(b, offset, newlen);
  position += ret;
  return ret;
}
{code}
-- This message was sent by Atlassian JIRA (v6.2#6252)
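A sketch of the fix the report implies: only the three-argument overload should advance the position, and {{read(byte[])}} simply delegates. The class and field names below are simplified stand-ins, not the actual HarFileSystem internals:

```java
import java.io.IOException;
import java.io.InputStream;

// Simplified stand-in for HarFsInputStream: tracks a position within
// a bounded region [0, end) of an underlying stream.
class BoundedPositionStream extends InputStream {
    private final InputStream underlying;
    private long position;  // bytes consumed so far
    private final long end; // exclusive upper bound of the region

    BoundedPositionStream(InputStream underlying, long end) {
        this.underlying = underlying;
        this.end = end;
    }

    @Override
    public synchronized int read(byte[] b) throws IOException {
        // Fixed: no extra `position += ret` here; the three-argument
        // overload below is the only place that advances position.
        return read(b, 0, b.length);
    }

    @Override
    public synchronized int read(byte[] b, int offset, int len) throws IOException {
        int newlen = len;
        if (position + len > end) {
            newlen = (int) (end - position);
        }
        if (newlen == 0) {
            return -1;
        }
        int ret = underlying.read(b, offset, newlen);
        if (ret > 0) {        // guard against adding -1 at end of stream
            position += ret;
        }
        return ret;
    }

    @Override
    public int read() throws IOException {
        byte[] one = new byte[1];
        return read(one, 0, 1) == -1 ? -1 : one[0] & 0xff;
    }

    long getPosition() {
        return position;
    }
}
```

With the double update removed, reading N bytes via {{read(byte[])}} advances the position by exactly N, not 2N.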
[jira] [Created] (HADOOP-10441) Namenode metric rpc.RetryCache/NameNodeRetryCache.CacheHit can't be correctly processed by Ganglia
Jing Zhao created HADOOP-10441: -- Summary: Namenode metric rpc.RetryCache/NameNodeRetryCache.CacheHit can't be correctly processed by Ganglia Key: HADOOP-10441 URL: https://issues.apache.org/jira/browse/HADOOP-10441 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.4.0 Reporter: Jing Zhao Assignee: Jing Zhao Priority: Minor

The issue is reported by [~dsen]: the recently added Namenode metric rpc.RetryCache/NameNodeRetryCache.CacheHit can't be correctly processed by Ganglia because its name contains a '/'. Proposal: the metric should be renamed to rpc.RetryCache.NameNodeRetryCache.CacheHit. The name is built here, in org.apache.hadoop.ipc.metrics.RetryCacheMetrics#RetryCacheMetrics:

{code}
RetryCacheMetrics(RetryCache retryCache) {
  name = "RetryCache/" + retryCache.getCacheName();
  registry = new MetricsRegistry(name);
  if (LOG.isDebugEnabled()) {
    LOG.debug("Initialized " + registry);
  }
}
{code}
-- This message was sent by Atlassian JIRA (v6.2#6252)
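The proposed rename amounts to using '.' instead of '/' as the separator when building the registry name. A minimal sketch (the helper below is hypothetical, not the actual RetryCacheMetrics change):

```java
// Hypothetical helper illustrating the proposed naming scheme:
// join with '.' so consumers like Ganglia accept the metric name.
class RetryCacheMetricName {
    static String of(String cacheName) {
        return "RetryCache." + cacheName;
    }
}
```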
[jira] [Created] (HADOOP-10442) Group look-up can cause segmentation fault when certain JNI-based mapping module is used.
Kihwal Lee created HADOOP-10442: --- Summary: Group look-up can cause segmentation fault when certain JNI-based mapping module is used. Key: HADOOP-10442 URL: https://issues.apache.org/jira/browse/HADOOP-10442 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.3.0, 2.4.0 Reporter: Kihwal Lee Priority: Blocker When JniBasedUnixGroupsNetgroupMapping or JniBasedUnixGroupsMapping is used, we get segmentation faults very often. The same system ran 2.2 for months without any problem, but as soon as it was upgraded to 2.3, it started crashing. This resulted in multiple name node crashes per day. The server was running nslcd (nss-pam-ldapd-0.7.5-15.el6_3.2). We did not see this problem on the servers running sssd. There was one change in the C code that modified the return-code handling after the getgrouplist() call: if the function returns 0 or a negative value less than -1, it will do realloc() instead of returning failure. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10443) limit symbol visibility in libhdfs-core.so and libyarn-core.so
Colin Patrick McCabe created HADOOP-10443: - Summary: limit symbol visibility in libhdfs-core.so and libyarn-core.so Key: HADOOP-10443 URL: https://issues.apache.org/jira/browse/HADOOP-10443 Project: Hadoop Common Issue Type: Sub-task Reporter: Colin Patrick McCabe Priority: Minor We should avoid exposing all the symbols of libhdfs-core.so and libyarn-core.so. Otherwise, they may conflict with symbols used in applications that link against the libraries. This can be done with gcc's symbol visibility directives. Also, we should probably link libuv and libprotobuf-c statically into our libraries, since most distributions don't yet include these libraries, and we don't want to have version issues there. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10444) add pom.xml infrastructure for hadoop-native-core
Colin Patrick McCabe created HADOOP-10444: - Summary: add pom.xml infrastructure for hadoop-native-core Key: HADOOP-10444 URL: https://issues.apache.org/jira/browse/HADOOP-10444 Project: Hadoop Common Issue Type: Sub-task Reporter: Colin Patrick McCabe Add pom.xml infrastructure for hadoop-native-core, so that it builds under Maven. We can look to how we integrated CMake into hadoop-hdfs-project and hadoop-common-project for inspiration here. In the long term, it would be nice to use a Maven plugin here (see HADOOP-8887) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10445) Implement DataTransferProtocol in libhdfs-core.so
Colin Patrick McCabe created HADOOP-10445: - Summary: Implement DataTransferProtocol in libhdfs-core.so Key: HADOOP-10445 URL: https://issues.apache.org/jira/browse/HADOOP-10445 Project: Hadoop Common Issue Type: Sub-task Reporter: Colin Patrick McCabe We need to implement DataTransferProtocol so that we can send and receive data to and from DataNodes. This is a different protocol from Hadoop IPC, so it will require a slightly different code path. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10446) native code for reading Hadoop configuration XML files
Colin Patrick McCabe created HADOOP-10446: - Summary: native code for reading Hadoop configuration XML files Key: HADOOP-10446 URL: https://issues.apache.org/jira/browse/HADOOP-10446 Project: Hadoop Common Issue Type: Sub-task Reporter: Colin Patrick McCabe We need to have a way to read Hadoop configuration XML files in the native HDFS and YARN clients. This will allow those clients to discover the locations of NameNodes and YARN daemons, as well as other configuration settings. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10447) Implement C code for parsing Hadoop / HDFS URIs
Colin Patrick McCabe created HADOOP-10447: - Summary: Implement C code for parsing Hadoop / HDFS URIs Key: HADOOP-10447 URL: https://issues.apache.org/jira/browse/HADOOP-10447 Project: Hadoop Common Issue Type: Sub-task Reporter: Colin Patrick McCabe We need some glue code for parsing Hadoop or HDFS URIs in C. Probably we should just put a small 'Path' wrapper around a URI parsing library like http://uriparser.sourceforge.net/ (BSD licensed) -- This message was sent by Atlassian JIRA (v6.2#6252)
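For reference, the components such a native parser would need to extract can be illustrated with Java's java.net.URI, which is what the Java client uses today (the hostname below is a made-up example):

```java
import java.net.URI;

class HdfsUriDemo {
    public static void main(String[] args) {
        // Decompose an HDFS URI into the pieces a native 'Path'
        // wrapper would need: scheme, host, port, and path.
        URI uri = URI.create("hdfs://namenode.example.com:8020/user/alice/data.txt");
        System.out.println(uri.getScheme()); // hdfs
        System.out.println(uri.getHost());   // namenode.example.com
        System.out.println(uri.getPort());   // 8020
        System.out.println(uri.getPath());   // /user/alice/data.txt
    }
}
```

A C implementation wrapping a library such as uriparser would need to reproduce this same decomposition.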