Build failed in Jenkins: Hadoop-Common-0.23-Build #1028
See https://builds.apache.org/job/Hadoop-Common-0.23-Build/1028/

--
[...truncated 8263 lines...]
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.19 sec
Running org.apache.hadoop.security.TestCredentials
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.559 sec
Running org.apache.hadoop.security.token.delegation.TestDelegationToken
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.907 sec
Running org.apache.hadoop.security.token.TestToken
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.148 sec
Running org.apache.hadoop.security.authorize.TestProxyUsers
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.55 sec
Running org.apache.hadoop.security.authorize.TestAccessControlList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.45 sec
Running org.apache.hadoop.security.TestAuthenticationFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.399 sec
Running org.apache.hadoop.security.TestJNIGroupsMapping
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.141 sec
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.368 sec
Running org.apache.hadoop.security.TestUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.361 sec
Running org.apache.hadoop.security.TestSecurityUtil
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.621 sec
Running org.apache.hadoop.security.TestGroupFallback
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.431 sec
Running org.apache.hadoop.security.TestGroupsCaching
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.276 sec
Running org.apache.hadoop.util.TestStringInterner
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.113 sec
Running org.apache.hadoop.util.TestOptions
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 sec
Running org.apache.hadoop.util.TestRunJar
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.127 sec
Running org.apache.hadoop.util.TestAsyncDiskService
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.158 sec
Running org.apache.hadoop.util.TestDataChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.189 sec
Running org.apache.hadoop.util.TestGenericOptionsParser
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.694 sec
Running org.apache.hadoop.util.TestHostsFileReader
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.187 sec
Running org.apache.hadoop.util.TestIndexedSort
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.539 sec
Running org.apache.hadoop.util.TestGenericsUtil
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.261 sec
Running org.apache.hadoop.util.TestPureJavaCrc32
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.298 sec
Running org.apache.hadoop.util.TestStringUtils
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.137 sec
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.079 sec
Running org.apache.hadoop.util.TestDiskChecker
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.496 sec
Running org.apache.hadoop.util.TestShell
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.199 sec
Running org.apache.hadoop.util.TestShutdownHookManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.145 sec
Running org.apache.hadoop.util.TestJarFinder
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.798 sec
Running org.apache.hadoop.util.TestReflectionUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.5 sec
Running org.apache.hadoop.io.TestSequenceFile
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.252 sec
Running org.apache.hadoop.io.TestSecureIOUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.497 sec
Running org.apache.hadoop.io.nativeio.TestNativeIO
Tests run: 9, Failures: 0, Errors: 0, Skipped: 9, Time elapsed: 0.168 sec
Running org.apache.hadoop.io.TestWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.071 sec
Running org.apache.hadoop.io.TestIOUtils
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.285 sec
Running org.apache.hadoop.io.TestTextNonUTF8
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.046 sec
Running org.apache.hadoop.io.TestMD5Hash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.157 sec
Running org.apache.hadoop.io.TestMapFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.692 sec
Running org.apache.hadoop.io.TestWritableName
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.138 sec
Running org.apache.hadoop.io.TestSortedMapWritable
Tests run: 4, Failures:
Build failed in Jenkins: Hadoop-Common-trunk #1193
See https://builds.apache.org/job/Hadoop-Common-trunk/1193/changes

Changes:
[arp] HDFS-6798. Add test case for incorrect data node condition during balancing. (Contributed by Benoy Antony)
[junping_du] YARN-2051. Fix bug in PBimpls and add more unit tests with reflection. (Contributed by Binglin Chang)
[arp] HDFS-3482. Update CHANGES.txt.
[arp] HDFS-6797. DataNode logs wrong layoutversion during upgrade. (Contributed by Benoy Antony)
[szetszwo] HDFS-6685. Balancer should preserve storage type of replicas.
[xgong] YARN-1994. Expose YARN/MR endpoints on multiple interfaces. Contributed by Craig Welch, Milan Potocnik, and Arpit Agarwal
[zjshen] YARN-2347. Consolidated RMStateVersion and NMDBSchemaVersion into Version in yarn-server-common. Contributed by Junping Du.
--
[...truncated 72002 lines...]
[DEBUG] Configuring mojo org.apache.maven.plugins:maven-resources-plugin:2.2:testResources from plugin realm ClassRealm[plugin>org.apache.maven.plugins:maven-resources-plugin:2.2, parent: sun.misc.Launcher$AppClassLoader@53004901]
[DEBUG] Configuring mojo 'org.apache.maven.plugins:maven-resources-plugin:2.2:testResources' with basic configurator --
[DEBUG] (f) filters = []
[DEBUG] (f) outputDirectory = https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-kms/target/test-classes
[DEBUG] (f) project = MavenProject: org.apache.hadoop:hadoop-kms:3.0.0-SNAPSHOT @ https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-kms/pom.xml
[DEBUG] (f) resources = [Resource {targetPath: null, filtering: false, FileSet {directory: https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-kms/src/test/resources, PatternSet [includes: {}, excludes: {}]}}]
[DEBUG] -- end configuration --
[INFO] Using default encoding to copy filtered resources.
[INFO]
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ hadoop-kms ---
[DEBUG] Configuring mojo org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile from plugin realm ClassRealm[plugin>org.apache.maven.plugins:maven-compiler-plugin:2.5.1, parent: sun.misc.Launcher$AppClassLoader@53004901]
[DEBUG] Configuring mojo 'org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile' with basic configurator --
[DEBUG] (f) basedir = https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-kms
[DEBUG] (f) buildDirectory = https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-kms/target
[DEBUG] (f) classpathElements = [https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-kms/target/test-classes, https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-kms/target/classes, https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-minikdc/target/hadoop-minikdc-3.0.0-SNAPSHOT.jar, /home/jenkins/.m2/repository/commons-io/commons-io/2.4/commons-io-2.4.jar, /home/jenkins/.m2/repository/org/apache/directory/server/apacheds-core-api/2.0.0-M15/apacheds-core-api-2.0.0-M15.jar, /home/jenkins/.m2/repository/org/apache/directory/server/apacheds-core-constants/2.0.0-M15/apacheds-core-constants-2.0.0-M15.jar, /home/jenkins/.m2/repository/org/apache/directory/server/apacheds-i18n/2.0.0-M15/apacheds-i18n-2.0.0-M15.jar, /home/jenkins/.m2/repository/org/apache/directory/api/api-i18n/1.0.0-M20/api-i18n-1.0.0-M20.jar, /home/jenkins/.m2/repository/org/apache/directory/api/api-asn1-api/1.0.0-M20/api-asn1-api-1.0.0-M20.jar, /home/jenkins/.m2/repository/org/apache/directory/api/api-ldap-client-api/1.0.0-M20/api-ldap-client-api-1.0.0-M20.jar, /home/jenkins/.m2/repository/org/apache/directory/api/api-ldap-codec-core/1.0.0-M20/api-ldap-codec-core-1.0.0-M20.jar, /home/jenkins/.m2/repository/org/apache/directory/api/api-ldap-extras-aci/1.0.0-M20/api-ldap-extras-aci-1.0.0-M20.jar, /home/jenkins/.m2/repository/org/apache/directory/api/api-ldap-extras-util/1.0.0-M20/api-ldap-extras-util-1.0.0-M20.jar, /home/jenkins/.m2/repository/org/apache/directory/api/api-ldap-model/1.0.0-M20/api-ldap-model-1.0.0-M20.jar, /home/jenkins/.m2/repository/org/apache/directory/api/api-util/1.0.0-M20/api-util-1.0.0-M20.jar, /home/jenkins/.m2/repository/org/apache/mina/mina-core/2.0.0-M5/mina-core-2.0.0-M5.jar, /home/jenkins/.m2/repository/net/sf/ehcache/ehcache-core/2.4.4/ehcache-core-2.4.4.jar, /home/jenkins/.m2/repository/org/apache/directory/server/apacheds-interceptor-kerberos/2.0.0-M15/apacheds-interceptor-kerberos-2.0.0-M15.jar, /home/jenkins/.m2/repository/org/apache/directory/server/apacheds-core/2.0.0-M15/apacheds-core-2.0.0-M15.jar, /home/jenkins/.m2/repository/org/apache/directory/server/apacheds-interceptors-admin/2.0.0-M15/apacheds-interceptors-admin-2.0.0-M15.jar,
[jira] [Created] (HADOOP-10920) site plugin couldn't parse index.apt.vm
Ted Yu created HADOOP-10920:
---
Summary: site plugin couldn't parse index.apt.vm
Key: HADOOP-10920
URL: https://issues.apache.org/jira/browse/HADOOP-10920
Project: Hadoop Common
Issue Type: Bug
Reporter: Ted Yu
Priority: Minor

From the log of https://builds.apache.org/job/Hadoop-Common-trunk/1193 :
{code}
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-site-plugin:3.3:site (docs) on project hadoop-kms: Error during page generation: Error parsing 'https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm': line [126] expected SECTION2, found SECTION3 -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-site-plugin:3.3:site (docs) on project hadoop-kms: Error during page generation
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:108)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:76)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:116)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:361)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:584)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:213)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:157)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoExecutionException: Error during page generation
at org.apache.maven.plugins.site.SiteMojo.execute(SiteMojo.java:143)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:133)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
... 19 more
Caused by: org.apache.maven.doxia.siterenderer.RendererException: Error parsing 'https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm': line [126] expected SECTION2, found SECTION3
at org.apache.maven.doxia.siterenderer.DefaultSiteRenderer.renderDocument(DefaultSiteRenderer.java:414)
at org.apache.maven.doxia.siterenderer.DoxiaDocumentRenderer.renderDocument(DoxiaDocumentRenderer.java:53)
at org.apache.maven.doxia.siterenderer.DefaultSiteRenderer.renderModule(DefaultSiteRenderer.java:319)
at org.apache.maven.doxia.siterenderer.DefaultSiteRenderer.render(DefaultSiteRenderer.java:135)
at org.apache.maven.plugins.site.SiteMojo.renderLocale(SiteMojo.java:175)
at org.apache.maven.plugins.site.SiteMojo.execute(SiteMojo.java:138)
... 21 more
Caused by: org.apache.maven.doxia.module.apt.AptParseException: expected SECTION2, found SECTION3
at org.apache.maven.doxia.module.apt.AptParser.parse(AptParser.java:235)
at org.apache.maven.doxia.DefaultDoxia.parse(DefaultDoxia.java:65)
at org.apache.maven.doxia.siterenderer.DefaultSiteRenderer.renderDocument(DefaultSiteRenderer.java:406)
... 26 more
Caused by: org.apache.maven.doxia.module.apt.AptParseException: expected SECTION2, found SECTION3
at org.apache.maven.doxia.module.apt.AptParser.expectedBlock(AptParser.java:1404)
at org.apache.maven.doxia.module.apt.AptParser.traverseSection(AptParser.java:787)
at org.apache.maven.doxia.module.apt.AptParser.traverseSection(AptParser.java:823)
at org.apache.maven.doxia.module.apt.AptParser.traverseBody(AptParser.java:765)
at org.apache.maven.doxia.module.apt.AptParser.parse(AptParser.java:230)
... 28 more
[ERROR]
{code}

--
This message was sent by Atlassian JIRA (v6.2#6252)
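For context: in the APT format that Doxia parses, heading depth is (as I understand the format) encoded by leading asterisks — an unindented title line starts a level-1 section, `*` a level-2 sub-section, `**` a level-3 sub-sub-section — and levels must nest strictly. The "expected SECTION2, found SECTION3" failure means a level-3 heading appeared where no enclosing level-2 heading exists. A minimal illustration with hypothetical headings (not the actual contents of index.apt.vm, which are not shown here):

```
Hadoop KMS

** Embedded Tomcat Configuration      <- fails: level-3 (**) with no level-2 (*) above it

Hadoop KMS

* KMS Configuration

** Embedded Tomcat Configuration      <- parses: level 3 nested under level 2
```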
[jira] [Resolved] (HADOOP-584) Calling shell scripts from build.xml discriminates Windows user minority.
[ https://issues.apache.org/jira/browse/HADOOP-584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer resolved HADOOP-584.
---
Resolution: Fixed

Calling shell scripts from build.xml discriminates Windows user minority.
---
Key: HADOOP-584
URL: https://issues.apache.org/jira/browse/HADOOP-584
Project: Hadoop Common
Issue Type: Bug
Components: scripts
Affects Versions: 0.7.0
Environment: Windows
Reporter: Konstantin Shvachko

This was introduced by HADOOP-567. The problem is that now I cannot even build Hadoop in Eclipse under Windows unless I run it under Cygwin. This is in a way the same as calling make in build.xml, which was recently fixed in HADOOP-537. I think we should not introduce more dependencies on Cygwin just in order to show something in the Web UI. I also don't remember us claiming that Cygwin or anything else except for Ant is required for Hadoop builds. Is there another way of solving this? build.xml defines a version property, and Ant has a user.name property. The URL does not change very often. Or maybe the web UI should obtain these properties at run-time. Or maybe packaging is a better solution, as you guys discussed.

--
This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10921) MapFile.fix fails silently when file is block compressed
Johannes Herr created HADOOP-10921:
---
Summary: MapFile.fix fails silently when file is block compressed
Key: HADOOP-10921
URL: https://issues.apache.org/jira/browse/HADOOP-10921
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 0.20.2
Reporter: Johannes Herr

MapFile provides a method 'fix' to reconstruct missing 'index' files. If the 'data' file is block compressed, the method will compute offsets that are too large, which leads to keys not being found in the map file. (See the attached test case.) Tested against 0.20.2, but the trunk version looks like it has the same problem.

The cause of the problem is that 'dataReader.getPosition()' is used to find the offset to write for the next entry that should be indexed. When the file is block compressed, however, 'dataReader.getPosition()' seems to return the position of the next compressed block, not of the block that contains the last entry. This position will thus be too large in most cases, and a seek operation with this offset will incorrectly report the key as not present.

I think it's not obvious how to fix this, since the SequenceFile reader does not provide the offset of the currently buffered entries. I've experimented with watching the offset change, and that seems to work mostly, but it is quite ugly and not exact in edge cases. The method should probably throw an exception when the 'data' file is block compressed instead of silently creating invalid files.

A workaround for block compressed files is to read the sequence file, write the entries to a new map file, and then replace the old file. This also avoids the problems mentioned below.

A few side notes:
1. The 'index' files created by the fix method are not block compressed (the 'index' files created by the MapFile Writer always are, since the 'index' file is read completely anyway).
2. The fix method does not index the first entry; the Writer does.
3. The header offset is not used.

--
This message was sent by Atlassian JIRA (v6.2#6252)
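The workaround described in the report (read the data back and let a fresh Writer record correct offsets as it appends) could be sketched roughly as below against the old-style Hadoop 0.20 API. This is an untested outline, not the reporter's code: the directory names are hypothetical, error handling is omitted, and the final swap of directories is left to the caller.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.util.ReflectionUtils;

public class RebuildMapFile {
  @SuppressWarnings("unchecked")
  public static void rebuild(String brokenDir, String rebuiltDir) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Read the 'data' sequence file of the broken map file directly...
    SequenceFile.Reader in = new SequenceFile.Reader(
        fs, new Path(brokenDir, MapFile.DATA_FILE_NAME), conf);
    WritableComparable key =
        (WritableComparable) ReflectionUtils.newInstance(in.getKeyClass(), conf);
    Writable val = (Writable) ReflectionUtils.newInstance(in.getValueClass(), conf);
    // ...and let a fresh Writer build a correct 'index' as entries are appended.
    MapFile.Writer out = new MapFile.Writer(
        conf, fs, rebuiltDir,
        (Class<? extends WritableComparable>) in.getKeyClass(),
        in.getValueClass());
    while (in.next(key, val)) {
      out.append(key, val);
    }
    out.close();
    in.close();
    // The caller can then replace brokenDir with rebuiltDir.
  }
}
```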
Write rights request to wiki
Dear admins,

I would like to be granted write access to the Hadoop wiki for my wiki user JeanBaptisteNote. The practical reason is to add our company's cluster (the company is Criteo; the cluster is a 20 PB, 800-node YARN cluster) to the PoweredBy page (https://wiki.apache.org/hadoop/PoweredBy).

Kind regards,
Jean-Baptiste
[jira] [Created] (HADOOP-10922) User documentation for CredentialShell
Andrew Wang created HADOOP-10922:
---
Summary: User documentation for CredentialShell
Key: HADOOP-10922
URL: https://issues.apache.org/jira/browse/HADOOP-10922
Project: Hadoop Common
Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Andrew Wang

The CredentialShell needs end user documentation for the website.

--
This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10923) User documentation for KeyShell
Andrew Wang created HADOOP-10923:
---
Summary: User documentation for KeyShell
Key: HADOOP-10923
URL: https://issues.apache.org/jira/browse/HADOOP-10923
Project: Hadoop Common
Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang

The KeyShell needs user documentation for the website.

--
This message was sent by Atlassian JIRA (v6.2#6252)
Re: Branching 2.5
Folks, I think we are very close to voting on RC0. Just wanted to check one (hopefully) last thing. I am unable to verify the signed maven artifacts are actually deployed.

To deploy the artifacts, I did the following and it looked like it ran fine.

1. .m2/settings.xml - server-id is apache.staging.https
2. mvn deploy -Psign,src,dist -Dmaven.test.skip.exec=true -Dcontainer-executor.conf.dir=/etc/hadoop/conf -Dgpg.passphrase=my-passphrase

However, I don't see it here - https://repository.apache.org. How do I verify this?

Thanks
Karthik

On Wed, Jul 30, 2014 at 4:30 PM, Karthik Kambatla ka...@cloudera.com wrote:

Thanks to Andrew's patch on HADOOP-10910, I am able to build an RC.

On Wed, Jul 30, 2014 at 1:59 AM, Ted Yu yuzhih...@gmail.com wrote:

Adding bui...@apache.org

Cheers

On Jul 30, 2014, at 12:52 AM, Andrew Wang andrew.w...@cloudera.com wrote:

Alright, dug around some more and I think it's that FINDBUGS_HOME is not being set correctly. I downloaded and extracted Findbugs 1.3.9, pointed FINDBUGS_HOME at it, and the build worked after that. I don't know what's up with the default maven build; it'd be great if someone could check. Can someone with access to the build machines check this? As a side note, I think 1.3.9 was released in 2009. It'd be nice to catch up with the last 5 years of static analysis :)

On Tue, Jul 29, 2014 at 11:36 PM, Andrew Wang andrew.w...@cloudera.com wrote:

I looked in the log; it also looks like findbugs is OOMing:

[java] Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
[java] at edu.umd.cs.findbugs.ba.Path.grow(Path.java:263)
[java] at edu.umd.cs.findbugs.ba.Path.copyFrom(Path.java:113)
[java] at edu.umd.cs.findbugs.ba.Path.duplicate(Path.java:103)
[java] at edu.umd.cs.findbugs.ba.obl.State.duplicate(State.java:65)

This is quite possibly related, since there's an error at the end like this:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: input file /home/jenkins/jenkins-slave/workspace/HADOOP2_Release_Artifacts_Builder/branch-2.5.0/hadoop-hdfs-project/hadoop-hdfs/target/findbugsXml.xml does not exist
[ERROR] around Ant part ...<xslt style=/home/jenkins/tools/findbugs/latest/src/xsl/default.xsl in=/home/jenkins/jenkins-slave/workspace/HADOOP2_Release_Artifacts_Builder/branch-2.5.0/hadoop-hdfs-project/hadoop-hdfs/target/findbugsXml.xml out=/home/jenkins/jenkins-slave/workspace/HADOOP2_Release_Artifacts_Builder/branch-2.5.0/hadoop-hdfs-project/hadoop-hdfs/target/site/findbugs.html/>... @ 44:368 in /home/jenkins/jenkins-slave/workspace/HADOOP2_Release_Artifacts_Builder/branch-2.5.0/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml

I'll try to figure out how to increase this, but if anyone else knows, feel free to chime in.

On Tue, Jul 29, 2014 at 5:41 PM, Karthik Kambatla ka...@cloudera.com wrote:

Devs, I created branch-2.5.0 and was trying to cut an RC, but ran into issues with creating one. If anyone knows what is going on, please help me out. I'll continue looking into it otherwise. https://builds.apache.org/job/HADOOP2_Release_Artifacts_Builder/24/console is the build that failed. It appears the issue is because it can't find Null.java. I run into the same issue locally as well, even with branch-2.4.1. So, I wonder if I should be doing anything else to create the RC instead?

Thanks
Karthik

On Sun, Jul 27, 2014 at 11:09 AM, Zhijie Shen zs...@hortonworks.com wrote:

I've just committed YARN-2247, which is the last 2.5 blocker from YARN.

On Sat, Jul 26, 2014 at 5:02 AM, Karthik Kambatla ka...@cloudera.com wrote:

A quick update: All remaining blockers are on the verge of getting committed. Once that is done, I plan to cut a branch for 2.5.0 and get an RC out hopefully this coming Monday.

On Fri, Jul 25, 2014 at 12:32 PM, Andrew Wang andrew.w...@cloudera.com wrote:

One thing I forgot: the release note activities are happening at HADOOP-10821. If you have other things you'd like to see mentioned, feel free to leave a comment on the JIRA and I'll try to include it.

Thanks,
Andrew

On Fri, Jul 25, 2014 at 12:28 PM, Andrew Wang andrew.w...@cloudera.com wrote:

I just went through and fixed up the HDFS and Common CHANGES.txt for 2.5.0. As a friendly reminder, please try to put things under the correct section :) We have subsections for the xattr changes in HDFS-2006 and HADOOP-10514, and there were some unrelated JIRAs appended to the end. I'd also encourage committers to be more liberal with their use of the NEW FEATURES section. I'm helping Karthik write up the 2.5 release notes, and I'm using NEW FEATURES to fill it out. When looking through the
Shell access to build machines
Hi all, I was wondering who has access to the build machines. Since moving to the new machines, we've had a number of what look like environmental issues leading to flaky builds. One still outstanding example is HDFS-6694, and a number of other times it would have been nice to poke around manually. Is it possible for more of us to get access to expedite some of this debugging? How would we go about requesting access? Thanks, Andrew
Re: Shell access to build machines
Adding builds@apache

Cheers

On Fri, Aug 1, 2014 at 12:04 PM, Andrew Wang andrew.w...@cloudera.com wrote:

Hi all, I was wondering who has access to the build machines. Since moving to the new machines, we've had a number of what look like environmental issues leading to flaky builds. One still outstanding example is HDFS-6694, and a number of other times it would have been nice to poke around manually. Is it possible for more of us to get access to expedite some of this debugging? How would we go about requesting access?

Thanks,
Andrew
Re: Branching 2.5
On Fri, Aug 1, 2014 at 11:28 AM, Karthik Kambatla ka...@cloudera.com wrote:

Folks, I think we are very close to voting on RC0. Just wanted to check one (hopefully) last thing. I am unable to verify the signed maven artifacts are actually deployed. To deploy the artifacts, I did the following and it looked like it ran fine.

1. .m2/settings.xml - server-id is apache.staging.https
2. mvn deploy -Psign,src,dist -Dmaven.test.skip.exec=true -Dcontainer-executor.conf.dir=/etc/hadoop/conf -Dgpg.passphrase=my-passphrase

However, I don't see it here - https://repository.apache.org. How do I verify this?

When deploy runs, it logs where it is uploading the artifacts to. Do you have the mvn deploy output still, Karthik? Maybe that'll help track where mvn hid it on you?

St.Ack
[jira] [Created] (HADOOP-10924) LocalDistributedCacheManager for concurrent sqoop processes fails to create unique directories
William Watson created HADOOP-10924:
---
Summary: LocalDistributedCacheManager for concurrent sqoop processes fails to create unique directories
Key: HADOOP-10924
URL: https://issues.apache.org/jira/browse/HADOOP-10924
Project: Hadoop Common
Issue Type: Bug
Reporter: William Watson

Kicking off many sqoop processes in different threads results in the following, if two are kicked off in the same second:

{code}
2014-08-01 13:47:24 -0400: INFO - 14/08/01 13:47:22 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Rename cannot overwrite non empty destination directory /tmp/hadoop-hadoop/mapred/local/1406915233073
2014-08-01 13:47:24 -0400: INFO - at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:149)
2014-08-01 13:47:24 -0400: INFO - at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:163)
2014-08-01 13:47:24 -0400: INFO - at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731)
2014-08-01 13:47:24 -0400: INFO - at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
2014-08-01 13:47:24 -0400: INFO - at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
2014-08-01 13:47:24 -0400: INFO - at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
2014-08-01 13:47:24 -0400: INFO - at java.security.AccessController.doPrivileged(Native Method)
2014-08-01 13:47:24 -0400: INFO - at javax.security.auth.Subject.doAs(Subject.java:415)
2014-08-01 13:47:24 -0400: INFO - at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
2014-08-01 13:47:24 -0400: INFO - at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
2014-08-01 13:47:24 -0400: INFO - at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:186)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:159)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:239)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.manager.SqlManager.importQuery(SqlManager.java:645)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:415)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:502)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
2014-08-01 13:47:24 -0400: INFO - at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
{code}

The issue is the following lines of code in the org.apache.hadoop.mapred.LocalDistributedCacheManager class:

{code}
// Generating unique numbers for FSDownload.
AtomicLong uniqueNumberGenerator = new AtomicLong(System.currentTimeMillis());
{code}

and

{code}
Long.toString(uniqueNumberGenerator.incrementAndGet())),
{code}

--
This message was sent by Atlassian JIRA (v6.2#6252)
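The collision the report describes is easy to see with plain JDK classes: two separate JVMs that each seed their own generator from the wall clock in the same millisecond derive the same "unique" directory name. A small self-contained demonstration (the two AtomicLongs stand in for the two sqoop processes; `nextLocalDir` is an illustrative helper, not the actual Hadoop method):

```java
import java.util.concurrent.atomic.AtomicLong;

public class CollisionDemo {
    // Mirrors how LocalDistributedCacheManager derives a local directory name.
    static String nextLocalDir(AtomicLong uniqueNumberGenerator) {
        return Long.toString(uniqueNumberGenerator.incrementAndGet());
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // Each process seeds its own generator; started in the same
        // millisecond, both see the same starting value...
        AtomicLong processA = new AtomicLong(now);
        AtomicLong processB = new AtomicLong(now);
        // ...and therefore compute the same "unique" directory name.
        System.out.println(nextLocalDir(processA).equals(nextLocalDir(processB)));
        // prints "true"
    }
}
```

The AtomicLong only guarantees uniqueness within one JVM; a seed shared across JVMs would need extra entropy (for example a PID or random component) to be unique across processes.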
Re: Branching 2.5
Juan and Yongjun just let me know that we need HDFS-6793 because it reverts an incompatible change. I just committed it down to branch-2.5.0.

On Fri, Aug 1, 2014 at 12:07 PM, Stack st...@duboce.net wrote:

On Fri, Aug 1, 2014 at 11:28 AM, Karthik Kambatla ka...@cloudera.com wrote:

Folks, I think we are very close to voting on RC0. Just wanted to check one (hopefully) last thing. I am unable to verify the signed maven artifacts are actually deployed. To deploy the artifacts, I did the following and it looked like it ran fine.

1. .m2/settings.xml - server-id is apache.staging.https
2. mvn deploy -Psign,src,dist -Dmaven.test.skip.exec=true -Dcontainer-executor.conf.dir=/etc/hadoop/conf -Dgpg.passphrase=my-passphrase

However, I don't see it here - https://repository.apache.org. How do I verify this?

When deploy runs, it logs where it is uploading the artifacts to. Do you have the mvn deploy output still, Karthik? Maybe that'll help track where mvn hid it on you?

St.Ack
[jira] [Created] (HADOOP-10925) Compilation fails in native link0 function on Windows.
Chris Nauroth created HADOOP-10925:
---
Summary: Compilation fails in native link0 function on Windows.
Key: HADOOP-10925
URL: https://issues.apache.org/jira/browse/HADOOP-10925
Project: Hadoop Common
Issue Type: Bug
Components: native
Affects Versions: 3.0.0, 2.6.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Blocker

HDFS-6482 introduced a new native code function for creating hard links. The Windows implementation of this function does not compile due to an incorrect call to {{CreateHardLink}}.

--
This message was sent by Atlassian JIRA (v6.2#6252)
Re: Shell access to build machines
Giri followed up with me, and a search of the mail archives reveals this earlier email from Rajiv (raj...@yahoo-inc.com):

=
Yahoo hosts build slaves for Hadoop and other ASF projects. Since the Hadoop development community requested to keep slaves separated from other ASF projects, the Infra team doesn't want to maintain it. Machines in the Y! data center (minerva, vesta, etc.) are in the apache.org domain and are managed by ASF infra. The rest, asf*.ygridcore.net, are managed by me. Most of the accounts on those boxes are not Yahoos. Any committer, independent of their employer, can get access, with the expectation that it would only be used for builds. This restriction was put in to stop people from using these as their personal dev boxes.
=

Not sure if this situation has changed with the new boxes, but I emailed Rajiv about access. Thanks all!

On Fri, Aug 1, 2014 at 12:06 PM, Ted Yu yuzhih...@gmail.com wrote:

Adding builds@apache

Cheers

On Fri, Aug 1, 2014 at 12:04 PM, Andrew Wang andrew.w...@cloudera.com wrote:

Hi all, I was wondering who has access to the build machines. Since moving to the new machines, we've had a number of what look like environmental issues leading to flaky builds. One still outstanding example is HDFS-6694, and a number of other times it would have been nice to poke around manually. Is it possible for more of us to get access to expedite some of this debugging? How would we go about requesting access?

Thanks,
Andrew
[jira] [Created] (HADOOP-10926) Improve test-patch.sh to apply binary diffs
Andrew Wang created HADOOP-10926: Summary: Improve test-patch.sh to apply binary diffs Key: HADOOP-10926 URL: https://issues.apache.org/jira/browse/HADOOP-10926 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0 Reporter: Andrew Wang The Unix {{patch}} command cannot apply binary diffs as generated via {{git diff --binary}}. This means we cannot get effective test-patch.sh runs when the patch requires adding a binary file. We should consider using a different patch method. -- This message was sent by Atlassian JIRA (v6.2#6252)
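The incompatibility is easy to reproduce. A minimal sketch (throwaway repo in a temp directory; file names are made up) showing that a `git diff --binary` patch round-trips through `git apply`, which plain `patch` cannot do:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email dev@example.com
git config user.name Dev
printf 'text\n' > notes.txt
git add notes.txt
git commit -qm "initial"
# Stage a text change plus a new binary file, then export a binary-capable diff.
printf '\037\213\000\000' > blob.bin
printf 'more\n' >> notes.txt
git add blob.bin notes.txt
git diff --binary --cached > change.patch
# Roll the working tree back, then reapply everything from the patch file.
git reset -q
git checkout -q -- notes.txt
rm blob.bin
git apply change.patch   # plain `patch` would reject the GIT binary patch hunk
```

One possible direction for test-patch.sh, then, is to try `git apply` (with `--check` first, and falling back across `-p0`/`-p1`) before resorting to the `patch` command.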
Re: Jenkins problem or patch problem?
I filed a JIRA to track applying binary patches and posted some notes from a quick investigation: https://issues.apache.org/jira/browse/HADOOP-10926 On Tue, Jul 29, 2014 at 10:37 AM, Andrew Wang andrew.w...@cloudera.com wrote: We could change test-patch to use git apply instead of the patch command. I know a lot of us use git apply when committing, so it seems like a safe change. On Tue, Jul 29, 2014 at 1:44 AM, Niels Basjes ni...@basjes.nl wrote: I think this behavior is better. This way you know your patch was not (fully) applied. It would be even better if there was a way to submit a patch with a binary file in there. Niels On Mon, Jul 28, 2014 at 11:29 PM, Andrew Wang andrew.w...@cloudera.com wrote: I had the same issue on HDFS-6696, with a patch generated with git diff --binary. I ended up making the same patch without the binary part, and it could be applied okay. This does differ in behavior from the old boxes, which were still able to apply the non-binary parts of a binary diff. On Mon, Jul 28, 2014 at 3:06 AM, Niels Basjes ni...@basjes.nl wrote: For my test case I needed a something.txt.gz file. However, for this specific test the file will never actually be read; it just has to be there and it must be a few bytes in size. Because binary files don't work, I simply created a file containing Hello world. Now this isn't a gzip file at all, yet for my test it does enough to make the test work as intended. So in fact I didn't solve the binary attachment problem at all. On Mon, Jul 28, 2014 at 1:40 AM, Ted Yu yuzhih...@gmail.com wrote: Mind telling us how you included the binary file in your svn patch? Thanks On Sun, Jul 27, 2014 at 12:27 PM, Niels Basjes ni...@basjes.nl wrote: I created a patch file with SVN and it works now. I dare to ask: Are there any git-created patch files that work? On Sun, Jul 27, 2014 at 9:44 PM, Niels Basjes ni...@basjes.nl wrote: I'll look for a workaround regarding the binary file. Thanks.
On Sun, Jul 27, 2014 at 9:07 PM, Ted Yu yuzhih...@gmail.com wrote: A similar problem has been observed for HBase patches. Have you tried attaching a level-1 patch? For the binary file, to my knowledge 'git apply' is able to handle it, but Hadoop is currently using svn. Cheers On Sun, Jul 27, 2014 at 11:01 AM, Niels Basjes ni...@basjes.nl wrote: Hi, I just submitted a patch and Jenkins said it failed to apply the patch. But when I look at the console output https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4771//console it says: At revision 1613826. MAPREDUCE-2094 patch is being downloaded at Sun Jul 27 18:50:44 UTC 2014 from http://issues.apache.org/jira/secure/attachment/12658034/MAPREDUCE-2094-20140727.patch *cp : cannot stat '/home/jenkins/buildSupport/lib/*': No such file or directory* The patch does not appear to apply with p0 to p2 PATCH APPLICATION FAILED Now, I do have a binary file (for the unit test) in this patch; perhaps I did something wrong? Or is this problem caused by the error I highlighted? What can I do to fix this? -- Best regards / Met vriendelijke groeten, Niels Basjes
[VOTE] Release Apache Hadoop 2.5.0
Hi folks, I have put together a release candidate (rc0) for Hadoop 2.5.0. The RC is available at: http://people.apache.org/~kasha/hadoop-2.5.0-RC0/ The RC tag in svn is here: https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.5.0-rc0/ The maven artifacts are staged at https://repository.apache.org/content/repositories/orgapachehadoop-1007/ You can find my public key at: http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS Please try the release and vote. The vote will run for 5 days. Thanks Karthik
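For anyone trying the RC, verification generally means checking the published checksums and the GPG signatures against the KEYS file above. A self-contained sketch of the checksum half, demonstrated on a locally created stand-in file (the artifact name is assumed; the real RC may publish checksums in a different file format):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
# Stand-in for the downloaded release artifact.
echo "release bits" > hadoop-2.5.0.tar.gz
# What gets published alongside the artifact:
sha256sum hadoop-2.5.0.tar.gz > hadoop-2.5.0.tar.gz.sha256
# What a voter runs after downloading both files; prints "hadoop-2.5.0.tar.gz: OK".
sha256sum -c hadoop-2.5.0.tar.gz.sha256
# The signature half would be: gpg --import KEYS, then
# gpg --verify hadoop-2.5.0.tar.gz.asc hadoop-2.5.0.tar.gz
```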
Re: [VOTE] Release Apache Hadoop 2.5.0
I am obviously a +1 (non-binding). I brought up a pseudo-distributed cluster and ran a few HDFS commands and MR jobs. On Fri, Aug 1, 2014 at 4:16 PM, Karthik Kambatla ka...@cloudera.com wrote: Hi folks, I have put together a release candidate (rc0) for Hadoop 2.5.0. The RC is available at: http://people.apache.org/~kasha/hadoop-2.5.0-RC0/ The RC tag in svn is here: https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.5.0-rc0/ The maven artifacts are staged at https://repository.apache.org/content/repositories/orgapachehadoop-1007/ You can find my public key at: http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS Please try the release and vote. The vote will run for 5 days. Thanks Karthik
[DISCUSS] Migrate from svn to git for source control?
Hi folks, From what I hear, a lot of devs use the git mirror for development/reviews and use subversion primarily for checking code in. I was wondering if it would make more sense just to move to git. In addition to a subjective liking of git, I see the following advantages in our workflow: 1. Feature branches - it becomes easier to work on them and keep rebasing against the latest trunk. 2. Cherry-picks between branches automatically ensure the exact same commit message and track the lineage as well. 3. When cutting new branches and/or updating maven versions etc., it allows doing all the work locally before pushing it to the main branch. 4. Opens us up to potentially using other code-review tools. (Gerrit?) 5. It is just more convenient. I am sure this was brought up before in different capacities. I believe the support for git in the ASF is healthy now, and several downstream projects have moved. Again, from what I hear, the ASF INFRA folks make the migration process fairly easy. What do you all think? Thanks Karthik
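On point 2, the mechanism would be `git cherry-pick -x`, which carries the commit message over verbatim and appends the source commit hash for lineage. A throwaway sketch (the JIRA number in the message is made up):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email dev@example.com
git config user.name Dev
echo base > file.txt
git add file.txt
git commit -qm "initial"
base=$(git symbolic-ref --short HEAD)   # works whatever the default branch is called
git checkout -q -b feature
echo change >> file.txt
git commit -qam "HADOOP-99999. Hypothetical fix."
git checkout -q "$base"
git cherry-pick -x feature >/dev/null
# Same message as the feature commit, plus "(cherry picked from commit ...)".
git log -1 --format=%B
```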
[jira] [Created] (HADOOP-10927) Ran `hadoop credential` expecting usage, got NPE instead
Josh Elser created HADOOP-10927: --- Summary: Ran `hadoop credential` expecting usage, got NPE instead Key: HADOOP-10927 URL: https://issues.apache.org/jira/browse/HADOOP-10927 Project: Hadoop Common Issue Type: Bug Components: security Reporter: Josh Elser Priority: Minor {noformat} $ hadoop credential java.lang.NullPointerException at org.apache.hadoop.security.alias.CredentialShell.run(CredentialShell.java:67) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) at org.apache.hadoop.security.alias.CredentialShell.main(CredentialShell.java:420) {noformat} Ran a no-arg version of {{hadoop credential}} expecting to get the usage/help message (as other commands do), and got the above exception instead. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Branching 2.5
Tom White helped me figure it out, and closed the Nexus repository for me. Thanks Tom for helping and Stack for offering to help. On Fri, Aug 1, 2014 at 11:28 AM, Karthik Kambatla ka...@cloudera.com wrote: Folks, I think we are very close to voting on RC0. Just wanted to check one (hopefully) last thing. I am unable to verify the signed maven artifacts are actually deployed. To deploy the artifacts, I did the following and it looked like it ran fine. 1. .m2/settings.xml - server-id is apache.staging.https 2. mvn deploy -Psign,src,dist -Dmaven.test.skip.exec=true -Dcontainer-executor.conf.dir=/etc/hadoop/conf -Dgpg.passphrase=my-passphrase However, I don't see it here - https://repository.apache.org. How do I verify this? Thanks Karthik On Wed, Jul 30, 2014 at 4:30 PM, Karthik Kambatla ka...@cloudera.com wrote: Thanks to Andrew's patch on HADOOP-10910, I am able to build an RC. On Wed, Jul 30, 2014 at 1:59 AM, Ted Yu yuzhih...@gmail.com wrote: Adding bui...@apache.org Cheers On Jul 30, 2014, at 12:52 AM, Andrew Wang andrew.w...@cloudera.com wrote: Alright, dug around some more and I think it's that FINDBUGS_HOME is not being set correctly. I downloaded and extracted Findbugs 1.3.9, pointed FINDBUGS_HOME at it, and the build worked after that. I don't know what's up with the default maven build, it'd be great if someone could check. Can someone with access to the build machines check this? As a side note, I think 1.3.9 was released in 2009. 
It'd be nice to catch up with the last 5 years of static analysis :) On Tue, Jul 29, 2014 at 11:36 PM, Andrew Wang andrew.w...@cloudera.com wrote: I looked in the log, it also looks like findbugs is OOMing: [java] Exception in thread main java.lang.OutOfMemoryError: GC overhead limit exceeded [java] at edu.umd.cs.findbugs.ba.Path.grow(Path.java:263) [java] at edu.umd.cs.findbugs.ba.Path.copyFrom(Path.java:113) [java] at edu.umd.cs.findbugs.ba.Path.duplicate(Path.java:103) [java] at edu.umd.cs.findbugs.ba.obl.State.duplicate(State.java:65) This is quite possibly related, since there's an error at the end like this: [ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: input file /home/jenkins/jenkins-slave/workspace/HADOOP2_Release_Artifacts_Builder/branch-2.5.0/hadoop-hdfs-project/hadoop-hdfs/target/findbugsXml.xml does not exist [ERROR] around Ant part ...xslt style=/home/jenkins/tools/findbugs/latest/src/xsl/default.xsl in=/home/jenkins/jenkins-slave/workspace/HADOOP2_Release_Artifacts_Builder/branch-2.5.0/hadoop-hdfs-project/hadoop-hdfs/target/findbugsXml.xml out=/home/jenkins/jenkins-slave/workspace/HADOOP2_Release_Artifacts_Builder/branch-2.5.0/hadoop-hdfs-project/hadoop-hdfs/target/site/findbugs.html/... @ 44:368 in /home/jenkins/jenkins-slave/workspace/HADOOP2_Release_Artifacts_Builder/branch-2.5.0/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml I'll try to figure out how to increase this, but if anyone else knows, feel free to chime in. On Tue, Jul 29, 2014 at 5:41 PM, Karthik Kambatla ka...@cloudera.com wrote: Devs, I created branch-2.5.0 and was trying to cut an RC, but ran into issues with creating one. If anyone knows what is going on, please help me out. I'll continue looking into it otherwise. https://builds.apache.org/job/HADOOP2_Release_Artifacts_Builder/24/console is the build that failed. It appears the issue is because it can't find Null.java.
I run into the same issue locally as well, even with branch-2.4.1. So, I wonder if I should be doing anything else to create the RC instead? Thanks Karthik On Sun, Jul 27, 2014 at 11:09 AM, Zhijie Shen zs...@hortonworks.com wrote: I've just committed YARN-2247, which is the last 2.5 blocker from YARN. On Sat, Jul 26, 2014 at 5:02 AM, Karthik Kambatla ka...@cloudera.com wrote: A quick update: All remaining blockers are on the verge of getting committed. Once that is done, I plan to cut a branch for 2.5.0 and get an RC out hopefully this coming Monday. On Fri, Jul 25, 2014 at 12:32 PM, Andrew Wang andrew.w...@cloudera.com wrote: One thing I forgot, the release note activities are happening at HADOOP-10821. If you have other things you'd like to see mentioned, feel free to leave a comment on the JIRA and I'll try to include it. Thanks, Andrew On Fri, Jul 25, 2014 at 12:28 PM, Andrew Wang andrew.w...@cloudera.com wrote: I just went through and fixed up the HDFS and Common CHANGES.txt for 2.5.0. As a friendly reminder, please try to put things under the correct section :) We have subsections for the xattr changes in HDFS-2006 and HADOOP-10514, and there were some unrelated JIRAs appended to the
Re: [VOTE] Release Apache Hadoop 2.5.0
Missed Andrew's email in the other thread. Looks like we might need HDFS-6793. I'll wait to see if others find any other issues, so I can address them all together. On Fri, Aug 1, 2014 at 4:25 PM, Karthik Kambatla ka...@cloudera.com wrote: I am obviously a +1 (non-binding). I brought up a pseudo-distributed cluster and ran a few HDFS commands and MR jobs. On Fri, Aug 1, 2014 at 4:16 PM, Karthik Kambatla ka...@cloudera.com wrote: Hi folks, I have put together a release candidate (rc0) for Hadoop 2.5.0. The RC is available at: http://people.apache.org/~kasha/hadoop-2.5.0-RC0/ The RC tag in svn is here: https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.5.0-rc0/ The maven artifacts are staged at https://repository.apache.org/content/repositories/orgapachehadoop-1007/ You can find my public key at: http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS Please try the release and vote. The vote will run for 5 days. Thanks Karthik
[jira] [Created] (HADOOP-10928) Incorrect usage on `hadoop credential list`
Josh Elser created HADOOP-10928: --- Summary: Incorrect usage on `hadoop credential list` Key: HADOOP-10928 URL: https://issues.apache.org/jira/browse/HADOOP-10928 Project: Hadoop Common Issue Type: Bug Components: security Reporter: Josh Elser Priority: Trivial Attachments: HADOOP-10928.diff {{hadoop credential list}}'s usage message states a mandatory {{alias}} argument. The code does not actually accept an alias. Fix the message. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: [DISCUSS] Migrate from svn to git for source control?
Thanks for starting this thread Karthik! Big +1 from me. I only use svn when I have to commit things or work on the site, otherwise it's always the git mirror or local git repos. Considering that the git mirror works as well as it does, I'd expect this to be a pretty smooth transition. Best, Andrew On Fri, Aug 1, 2014 at 4:43 PM, Karthik Kambatla ka...@cloudera.com wrote: Hi folks, From what I hear, a lot of devs use the git mirror for development/reviews and use subversion primarily for checking code in. I was wondering if it would make more sense just to move to git. In addition to subjective liking of git, I see the following advantages in our workflow: 1. Feature branches - it becomes easier to work on them and keep rebasing against the latest trunk. 2. Cherry-picks between branches automatically ensures the exact same commit message and tracks the lineage as well. 3. When cutting new branches and/or updating maven versions etc., it allows doing all the work locally before pushing it to the main branch. 4. Opens us up to potentially using other code-review tools. (Gerrit?) 5. It is just more convenient. I am sure this was brought up before in different capacities. I believe the support for git in ASF is healthy now and several downstream projects have moved. Again, from what I hear, ASF INFRA folks make the migration process fairly easy. What do you all think? Thanks Karthik
Re: [DISCUSS] Migrate from svn to git for source control?
+1. We did it for Oozie a while back and it was painless, with only minor issues in the Jenkins jobs. Rebasing feature branches on the latest trunk may be tricky, as that may require a force push, and if I'm not mistaken force pushes are disabled in Apache git. thx On Fri, Aug 1, 2014 at 4:43 PM, Karthik Kambatla ka...@cloudera.com wrote: Hi folks, From what I hear, a lot of devs use the git mirror for development/reviews and use subversion primarily for checking code in. I was wondering if it would make more sense just to move to git. In addition to subjective liking of git, I see the following advantages in our workflow: 1. Feature branches - it becomes easier to work on them and keep rebasing against the latest trunk. 2. Cherry-picks between branches automatically ensures the exact same commit message and tracks the lineage as well. 3. When cutting new branches and/or updating maven versions etc., it allows doing all the work locally before pushing it to the main branch. 4. Opens us up to potentially using other code-review tools. (Gerrit?) 5. It is just more convenient. I am sure this was brought up before in different capacities. I believe the support for git in ASF is healthy now and several downstream projects have moved. Again, from what I hear, ASF INFRA folks make the migration process fairly easy. What do you all think? Thanks Karthik