Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64
For more details, see https://builds.apache.org/job/hadoop-trunk-win/415/

[Mar 22, 2018 4:21:52 PM] (inigoiri) HDFS-13318. RBF: Fix FindBugs in hadoop-hdfs-rbf. Contributed by Ekanth
[Mar 22, 2018 5:21:10 PM] (kihwal) HDFS-13195. DataNode conf page cannot display the current value after
[Mar 22, 2018 5:52:02 PM] (arp) HADOOP-15334. Upgrade Maven surefire plugin. Contributed by Arpit
[Mar 22, 2018 6:04:37 PM] (yufei) HADOOP-15331. Fix a race condition causing parsing error of
[Mar 22, 2018 6:29:31 PM] (weichiu) HDFS-11900. Hedged reads thread pool creation not synchronized.
[Mar 22, 2018 8:32:57 PM] (inigoiri) HDFS-12792. RBF: Test Router-based federation using HDFSContract.
[Mar 22, 2018 9:09:06 PM] (jitendra) HADOOP-14067. VersionInfo should load version-info.properties from its
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8723. Integrate the build infrastructure with hdfs-client.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8724. Import third_party libraries into the repository. Contributed
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8725. Use std::chrono to implement the timer in the asio library.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8737. Initial implementation of a Hadoop RPC v9 client. Contributed
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8745. Use Doxygen to generate documents for libhdfspp. Contributed
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8758. Implement the continuation library in libhdfspp. Contributed
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8759. Implement remote block reader in libhdfspp. Contributed by
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8764. Generate Hadoop RPC stubs from protobuf definitions.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8788. Implement unit tests for remote block reader in libhdfspp.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8775. SASL support for data transfer protocol in libhdfspp.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8774. Implement FileSystem and InputStream API for libhdfspp.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8952. InputStream.PositionRead() should be aware of available DNs.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9025. Fix compilation issues on arch linux. Contributed by Owen
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9093. Initialize protobuf fields in RemoteBlockReaderTest.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9116. Suppress false positives from Valgrind on uninitialized
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9108. InputStreamImpl::ReadBlockContinuation stores wrong pointers
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9095. RPC client should fail gracefully when the connection is
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9207. Move the implementation to the hdfs-native-client module.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9265. InputStreamImpl should hold a shared_ptr of the BlockReader.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9288. Import RapidXML 1.13 for libhdfspp. Contributed by Bob
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8766. Implement a libhdfs(3) compatible API. Contributed by James
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9340. libhdfspp fails to compile after HDFS-9207. Contributed by
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9320. libhdfspp should use sizeof(int32_t) instead of sizeof(int)
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9328. Formalize coding standards for libhdfs++. Contributed by
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9419. Import the optional library into libhdfs++. Contributed by
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9408. Build both static and dynamic libraries for libhdfspp.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9103. Retry reads on DN failure. Contributed by James Clampffer.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9368. Implement reads with implicit offset state in libhdfs++.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9359. Test libhdfs++ with existing libhdfs tests. Contributed by
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9117. Config file reader / options classes for libhdfs++.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9452. libhdfs++ Fix memory stomp in OpenFileForRead. Contributed
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9448. Enable valgrind for libhdfspp unit tests. Contributed by
[Mar 22, 2018 9:19:46 PM] (james.clampffer) Revert HDFS-9448.
[Mar 22, 2018 9:19:46 PM] (james.clampffer) HDFS-9144. Refactoring libhdfs++ into stateful/ephemeral objects.
[Mar 22, 2018 9:19:46 PM] (james.clampffer) HDFS-9497. move lib/proto/cpp_helpers to third-party since it won't
[Mar 22, 2018 9:19:46 PM] (james.clampffer) HDFS-9504. Initialize BadNodeTracker in FileSystemImpl constructor.
[Mar 22, 2018 9:19:46 PM] (james.clampffer) HDFS-9228. libhdfs++ should respect NN retry configuration settings.
[Mar 22, 2018 9:19:46 PM]
Re: [VOTE] Adopt HDSL as a new Hadoop subproject
+1 (binding)

Happy to see the community converge on a proposal.

On Fri, Mar 23, 2018 at 11:18 AM, Andrew Wang wrote:
> +1
>
> If this VOTE is to gather consensus about establishing a new subproject,
> let's definitely proceed with that.
>
> It sounds like we're already discussing changes to the details of how the
> project will be run, and releasing from the branch vs. maven profile is not
> a blocker for me. I raised it since I thought it would reduce the amount of
> additional infra/build work, but it's fine if the preference is to just do
> the work. Sorry if my earlier reply sounded like bikeshedding.
>
> Cheers,
> Andrew
>
> On Fri, Mar 23, 2018 at 10:00 AM, Brahma Reddy Battula wrote:
> > +1 (binding)
> >
> > On Tue, Mar 20, 2018 at 11:50 PM, Owen O'Malley wrote:
> > > All,
> > >
> > > Following our discussions on the previous thread (Merging branch HDFS-7240
> > > to trunk), I'd like to propose the following:
> > >
> > > * HDSL become a subproject of Hadoop.
> > > * HDSL will release separately from Hadoop. Hadoop releases will not
> > >   contain HDSL and vice versa.
> > > * HDSL will get its own jira instance so that the release tags stay
> > >   separate.
> > > * On trunk (as opposed to release branches) HDSL will be a separate module
> > >   in Hadoop's source tree. This will enable the HDSL to work on their trunk
> > >   and the Hadoop trunk without making releases for every change.
> > > * Hadoop's trunk will only build HDSL if a non-default profile is enabled.
> > > * When Hadoop creates a release branch, the RM will delete the HDSL module
> > >   from the branch.
> > > * HDSL will have their own Yetus checks and won't cause failures in the
> > >   Hadoop patch check.
> > >
> > > I think this accomplishes most of the goals of encouraging HDSL development
> > > while minimizing the potential for disruption of HDFS development.
> > >
> > > The vote will run the standard 7 days and requires a lazy 2/3 vote. PMC
> > > votes are binding, but everyone is encouraged to vote.
> > >
> > > +1 (binding)
> > >
> > > .. Owen
> >
> > --
> > --Brahma Reddy Battula

--
A very happy Hadoop contributor
Re: [VOTE] Adopt HDSL as a new Hadoop subproject
+1

If this VOTE is to gather consensus about establishing a new subproject, let's definitely proceed with that.

It sounds like we're already discussing changes to the details of how the project will be run, and releasing from the branch vs. maven profile is not a blocker for me. I raised it since I thought it would reduce the amount of additional infra/build work, but it's fine if the preference is to just do the work. Sorry if my earlier reply sounded like bikeshedding.

Cheers,
Andrew

On Fri, Mar 23, 2018 at 10:00 AM, Brahma Reddy Battula wrote:
> +1 (binding)
>
> On Tue, Mar 20, 2018 at 11:50 PM, Owen O'Malley wrote:
> > All,
> >
> > Following our discussions on the previous thread (Merging branch HDFS-7240
> > to trunk), I'd like to propose the following:
> >
> > * HDSL become a subproject of Hadoop.
> > * HDSL will release separately from Hadoop. Hadoop releases will not
> >   contain HDSL and vice versa.
> > * HDSL will get its own jira instance so that the release tags stay
> >   separate.
> > * On trunk (as opposed to release branches) HDSL will be a separate module
> >   in Hadoop's source tree. This will enable the HDSL to work on their trunk
> >   and the Hadoop trunk without making releases for every change.
> > * Hadoop's trunk will only build HDSL if a non-default profile is enabled.
> > * When Hadoop creates a release branch, the RM will delete the HDSL module
> >   from the branch.
> > * HDSL will have their own Yetus checks and won't cause failures in the
> >   Hadoop patch check.
> >
> > I think this accomplishes most of the goals of encouraging HDSL development
> > while minimizing the potential for disruption of HDFS development.
> >
> > The vote will run the standard 7 days and requires a lazy 2/3 vote. PMC
> > votes are binding, but everyone is encouraged to vote.
> >
> > +1 (binding)
> >
> > .. Owen
>
> --
> --Brahma Reddy Battula
Re: [VOTE] Release Apache Hadoop 3.0.1 (RC1)
Hi all,

Thanks everyone for voting! The vote passes successfully with 6 binding +1s, 7 non-binding +1s, and no -1s. I will work on the staging and releases.

Best,

On Fri, Mar 23, 2018 at 5:10 AM, Kuhu Shukla wrote:
> +1 (non-binding)
>
> Built from source.
> Installed on a pseudo distributed cluster.
> Ran word count job and basic hdfs commands.
>
> Thank you for the effort on this release.
>
> Regards,
> Kuhu
>
> On Thu, Mar 22, 2018 at 5:25 PM, Elek, Marton wrote:
>> +1 (non binding)
>>
>> I did a full build from source code, created a docker container and did
>> various basic level tests with robotframework based automation and
>> docker-compose based pseudo clusters [1].
>>
>> Including:
>>
>> * Hdfs federation smoke test
>> * Basic ViewFS configuration
>> * Yarn example jobs
>> * Spark example jobs (with and without yarn)
>> * Simple hive table creation
>>
>> Marton
>>
>> [1]: https://github.com/flokkr/runtime-compose
>>
>> On 03/18/2018 05:11 AM, Lei Xu wrote:
>>> Hi, all
>>>
>>> I've created release candidate RC-1 for Apache Hadoop 3.0.1.
>>>
>>> Apache Hadoop 3.0.1 will be the first bug fix release for the Apache
>>> Hadoop 3.0 release. It includes 49 bug fixes and security fixes, of
>>> which 12 are blockers and 17 are critical.
>>>
>>> Please note:
>>> * HDFS-12990. Change default NameNode RPC port back to 8020. This is an
>>> incompatible change relative to Hadoop 3.0.0. After 3.0.1 is released,
>>> Apache Hadoop 3.0.0 will be deprecated due to this change.
>>>
>>> The release page is:
>>> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3.0+Release
>>>
>>> The new RC is available at: http://home.apache.org/~lei/hadoop-3.0.1-RC1/
>>>
>>> The git tag is release-3.0.1-RC1, and the latest commit is
>>> 496dc57cc2e4f4da117f7a8e3840aaeac0c1d2d0
>>>
>>> The maven artifacts are available at:
>>> https://repository.apache.org/content/repositories/orgapachehadoop-1081/
>>>
>>> Please try the release and vote; the vote will run for the usual 5
>>> days, ending on 3/22/2018 at 6 pm PST.
>>>
>>> Thanks!
>>>
>>> -
>>> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
>>> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

--
Lei (Eddy) Xu
Software Engineer, Cloudera
[jira] [Created] (HDFS-13344) adl.AdlFilesystem.close() doesn't release locks on open files
Jay Hankinson created HDFS-13344:
---

Summary: adl.AdlFilesystem.close() doesn't release locks on open files
Key: HDFS-13344
URL: https://issues.apache.org/jira/browse/HDFS-13344
Project: Hadoop HDFS
Issue Type: Bug
Components: hdfs
Affects Versions: 2.7.3
Environment: HDInsight on MS Azure:
Hadoop 2.7.3.2.6.2.25-1
Subversion g...@github.com:hortonworks/hadoop.git -r 1ceeb58bb3bb5904df0cbb7983389bcaf2ffd0b6
Compiled by jenkins on 2017-11-29T15:28Z
Compiled with protoc 2.5.0
From source with checksum 90b73c4c185645c1f47b61f942230
This command was run using /usr/hdp/2.6.2.25-1/hadoop/hadoop-common-2.7.3.2.6.2.25-1.jar
Reporter: Jay Hankinson

If you write to a file on an Azure ADL filesystem and close the file system but not the file before the process exits, the next time you try to open the file for append it fails with:

Exception in thread "main" java.io.IOException: APPEND failed with error 0x83090a16 (Failed to perform the requested operation because the file is currently open in write mode by another user or process.). [a67c6b32-e78b-4852-9fac-142a3e2ba963][2018-03-22T20:54:08.3520940-07:00]

The following moves a local file to HDFS if it doesn't exist, or appends its contents if it does:

  public void addFile(String source, String dest, Configuration conf) throws IOException {
    FileSystem fileSystem = FileSystem.get(conf);
    // Get the filename out of the file path
    String filename = source.substring(source.lastIndexOf('/') + 1, source.length());
    // Create the destination path including the filename.
    if (dest.charAt(dest.length() - 1) != '/') {
      dest = dest + "/" + filename;
    } else {
      dest = dest + filename;
    }
    // Check if the file already exists
    Path path = new Path(dest);
    FSDataOutputStream out;
    if (fileSystem.exists(path)) {
      System.out.println("File " + dest + " already exists, appending");
      out = fileSystem.append(path);
    } else {
      out = fileSystem.create(path);
    }
    // Create a new file and write data to it.
    InputStream in = new BufferedInputStream(new FileInputStream(new File(source)));
    byte[] b = new byte[1024];
    int numBytes = 0;
    while ((numBytes = in.read(b)) > 0) {
      out.write(b, 0, numBytes);
    }
    // Close the file system, not the file
    in.close();
    //out.close();
    fileSystem.close();
  }

If "dest" is an adl:// location, invoking the function a second time (after the process has exited) raises the error. If it's a regular hdfs:// file system, it doesn't, as all the locks are released. The same exception is also raised if a subsequent append is done using: hdfs dfs -appendToFile. As I can't see a way to force lease recovery in this situation, this seems like a bug.

org.apache.hadoop.fs.adl.AdlFileSystem inherits close() from org.apache.hadoop.fs.FileSystem
https://hadoop.apache.org/docs/r3.0.0/api/org/apache/hadoop/fs/adl/AdlFileSystem.html

which states: "Close this FileSystem instance. Will release any held locks." This does not seem to be the case.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
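Until the AdlFileSystem behavior is fixed, the practical workaround on the application side is to close the output stream before (rather than relying on) FileSystem.close(). A minimal, self-contained sketch of that pattern, using java.nio streams on the local filesystem as a stand-in for the Hadoop/ADL API (the class and method names below are illustrative, not part of the report):

```java
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class AppendDemo {

    // Copies `source` into `dest`, appending if dest already exists.
    // try-with-resources guarantees the output stream is closed before
    // the method returns, so any write lease/lock is released even if
    // the process exits without a filesystem-level close().
    public static void addFile(Path source, Path dest) throws IOException {
        boolean exists = Files.exists(dest);
        try (InputStream in = new BufferedInputStream(Files.newInputStream(source));
             OutputStream out = Files.newOutputStream(dest,
                     StandardOpenOption.CREATE,
                     exists ? StandardOpenOption.APPEND : StandardOpenOption.WRITE)) {
            byte[] b = new byte[1024];
            int numBytes;
            while ((numBytes = in.read(b)) > 0) {
                out.write(b, 0, numBytes);
            }
        } // `out` (then `in`) is closed here, before anything else shuts down
    }
}
```

In the Hadoop version of the code above, this corresponds to uncommenting out.close() and calling it before fileSystem.close().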
Re: [VOTE] Adopt HDSL as a new Hadoop subproject
+1 (binding)

On Tue, Mar 20, 2018 at 11:50 PM, Owen O'Malley wrote:
> All,
>
> Following our discussions on the previous thread (Merging branch HDFS-7240
> to trunk), I'd like to propose the following:
>
> * HDSL become a subproject of Hadoop.
> * HDSL will release separately from Hadoop. Hadoop releases will not
>   contain HDSL and vice versa.
> * HDSL will get its own jira instance so that the release tags stay
>   separate.
> * On trunk (as opposed to release branches) HDSL will be a separate module
>   in Hadoop's source tree. This will enable the HDSL to work on their trunk
>   and the Hadoop trunk without making releases for every change.
> * Hadoop's trunk will only build HDSL if a non-default profile is enabled.
> * When Hadoop creates a release branch, the RM will delete the HDSL module
>   from the branch.
> * HDSL will have their own Yetus checks and won't cause failures in the
>   Hadoop patch check.
>
> I think this accomplishes most of the goals of encouraging HDSL development
> while minimizing the potential for disruption of HDFS development.
>
> The vote will run the standard 7 days and requires a lazy 2/3 vote. PMC
> votes are binding, but everyone is encouraged to vote.
>
> +1 (binding)
>
> .. Owen

--
--Brahma Reddy Battula
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/729/

[Mar 22, 2018 4:21:52 PM] (inigoiri) HDFS-13318. RBF: Fix FindBugs in hadoop-hdfs-rbf. Contributed by Ekanth
[Mar 22, 2018 5:21:10 PM] (kihwal) HDFS-13195. DataNode conf page cannot display the current value after
[Mar 22, 2018 5:52:02 PM] (arp) HADOOP-15334. Upgrade Maven surefire plugin. Contributed by Arpit
[Mar 22, 2018 6:04:37 PM] (yufei) HADOOP-15331. Fix a race condition causing parsing error of
[Mar 22, 2018 6:29:31 PM] (weichiu) HDFS-11900. Hedged reads thread pool creation not synchronized.
[Mar 22, 2018 8:32:57 PM] (inigoiri) HDFS-12792. RBF: Test Router-based federation using HDFSContract.
[Mar 22, 2018 9:09:06 PM] (jitendra) HADOOP-14067. VersionInfo should load version-info.properties from its
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8723. Integrate the build infrastructure with hdfs-client.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8724. Import third_party libraries into the repository. Contributed
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8725. Use std::chrono to implement the timer in the asio library.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8737. Initial implementation of a Hadoop RPC v9 client. Contributed
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8745. Use Doxygen to generate documents for libhdfspp. Contributed
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8758. Implement the continuation library in libhdfspp. Contributed
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8759. Implement remote block reader in libhdfspp. Contributed by
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8764. Generate Hadoop RPC stubs from protobuf definitions.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8788. Implement unit tests for remote block reader in libhdfspp.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8775. SASL support for data transfer protocol in libhdfspp.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8774. Implement FileSystem and InputStream API for libhdfspp.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8952. InputStream.PositionRead() should be aware of available DNs.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9025. Fix compilation issues on arch linux. Contributed by Owen
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9093. Initialize protobuf fields in RemoteBlockReaderTest.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9116. Suppress false positives from Valgrind on uninitialized
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9108. InputStreamImpl::ReadBlockContinuation stores wrong pointers
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9095. RPC client should fail gracefully when the connection is
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9207. Move the implementation to the hdfs-native-client module.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9265. InputStreamImpl should hold a shared_ptr of the BlockReader.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9288. Import RapidXML 1.13 for libhdfspp. Contributed by Bob
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-8766. Implement a libhdfs(3) compatible API. Contributed by James
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9340. libhdfspp fails to compile after HDFS-9207. Contributed by
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9320. libhdfspp should use sizeof(int32_t) instead of sizeof(int)
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9328. Formalize coding standards for libhdfs++. Contributed by
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9419. Import the optional library into libhdfs++. Contributed by
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9408. Build both static and dynamic libraries for libhdfspp.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9103. Retry reads on DN failure. Contributed by James Clampffer.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9368. Implement reads with implicit offset state in libhdfs++.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9359. Test libhdfs++ with existing libhdfs tests. Contributed by
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9117. Config file reader / options classes for libhdfs++.
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9452. libhdfs++ Fix memory stomp in OpenFileForRead. Contributed
[Mar 22, 2018 9:19:45 PM] (james.clampffer) HDFS-9448. Enable valgrind for libhdfspp unit tests. Contributed by
[Mar 22, 2018 9:19:46 PM] (james.clampffer) Revert HDFS-9448.
[Mar 22, 2018 9:19:46 PM] (james.clampffer) HDFS-9144. Refactoring libhdfs++ into stateful/ephemeral objects.
[Mar 22, 2018 9:19:46 PM] (james.clampffer) HDFS-9497. move lib/proto/cpp_helpers to third-party since it won't
[Mar 22, 2018 9:19:46 PM] (james.clampffer) HDFS-9504. Initialize BadNodeTracker in FileSystemImpl constructor.
[Mar 22, 2018 9:19:46 PM] (james.clampffer) HDFS-9228. libhdfs++ should respect NN retry configuration settings.
[Mar 22,
[jira] [Created] (HDFS-13343) Ozone: Provide docker based acceptance testing on pseudo cluster
Elek, Marton created HDFS-13343:
---

Summary: Ozone: Provide docker based acceptance testing on pseudo cluster
Key: HDFS-13343
URL: https://issues.apache.org/jira/browse/HDFS-13343
Project: Hadoop HDFS
Issue Type: Sub-task
Components: HDFS-7240
Affects Versions: HDFS-7240
Reporter: Elek, Marton
Assignee: Elek, Marton

As a complement to the existing MiniOzoneCluster based integration tests, we need some way to test the final artifacts. I propose to create an additional maven project which could contain simple test scenarios to start/stop the cluster, use the cli, etc. This could be done with the declarative approach of the robot framework. It could be integrated with maven and could start and stop docker based pseudo clusters similar to the existing dev docker-compose approach.
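A scenario of the kind proposed could look roughly like the following Robot Framework sketch (service and file names are hypothetical, not from the actual patch):

```robotframework
*** Settings ***
Library           Process

*** Test Cases ***
Cluster Starts And Basic CLI Works
    # Start the docker-compose based pseudo cluster (compose file hypothetical)
    Run Process       docker-compose    up    -d
    # Run a basic CLI command inside a container and check the exit code
    ${result} =       Run Process    docker-compose    exec    -T    datanode    hdfs    dfs    -ls    /
    Should Be Equal As Integers    ${result.rc}    0
    [Teardown]        Run Process    docker-compose    down
```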
[jira] [Created] (HDFS-13342) Ozone: Fix the class names in Ozone Script
Shashikant Banerjee created HDFS-13342:
---

Summary: Ozone: Fix the class names in Ozone Script
Key: HDFS-13342
URL: https://issues.apache.org/jira/browse/HDFS-13342
Project: Hadoop HDFS
Issue Type: Bug
Components: HDFS-7240
Affects Versions: HDFS-7240
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee

The Ozone (oz) script has wrong class names for freon etc.; as a result, freon cannot be started from the command line. This Jira proposes to fix all of these. The oz script will be renamed to Ozone as well.
Re: [VOTE] Release Apache Hadoop 3.0.1 (RC1)
+1 (non-binding)

Built from source.
Installed on a pseudo distributed cluster.
Ran word count job and basic hdfs commands.

Thank you for the effort on this release.

Regards,
Kuhu

On Thu, Mar 22, 2018 at 5:25 PM, Elek, Marton wrote:
> +1 (non binding)
>
> I did a full build from source code, created a docker container and did
> various basic level tests with robotframework based automation and
> docker-compose based pseudo clusters [1].
>
> Including:
>
> * Hdfs federation smoke test
> * Basic ViewFS configuration
> * Yarn example jobs
> * Spark example jobs (with and without yarn)
> * Simple hive table creation
>
> Marton
>
> [1]: https://github.com/flokkr/runtime-compose
>
> On 03/18/2018 05:11 AM, Lei Xu wrote:
>> Hi, all
>>
>> I've created release candidate RC-1 for Apache Hadoop 3.0.1.
>>
>> Apache Hadoop 3.0.1 will be the first bug fix release for the Apache
>> Hadoop 3.0 release. It includes 49 bug fixes and security fixes, of
>> which 12 are blockers and 17 are critical.
>>
>> Please note:
>> * HDFS-12990. Change default NameNode RPC port back to 8020. This is an
>> incompatible change relative to Hadoop 3.0.0. After 3.0.1 is released,
>> Apache Hadoop 3.0.0 will be deprecated due to this change.
>>
>> The release page is:
>> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3.0+Release
>>
>> The new RC is available at: http://home.apache.org/~lei/hadoop-3.0.1-RC1/
>>
>> The git tag is release-3.0.1-RC1, and the latest commit is
>> 496dc57cc2e4f4da117f7a8e3840aaeac0c1d2d0
>>
>> The maven artifacts are available at:
>> https://repository.apache.org/content/repositories/orgapachehadoop-1081/
>>
>> Please try the release and vote; the vote will run for the usual 5
>> days, ending on 3/22/2018 at 6 pm PST.
>>
>> Thanks!
[jira] [Created] (HDFS-13341) Ozone: Move ozone specific ServiceRuntimeInfo utility to the framework
Elek, Marton created HDFS-13341:
---

Summary: Ozone: Move ozone specific ServiceRuntimeInfo utility to the framework
Key: HDFS-13341
URL: https://issues.apache.org/jira/browse/HDFS-13341
Project: Hadoop HDFS
Issue Type: Sub-task
Components: HDFS-7240
Affects Versions: HDFS-7240
Reporter: Elek, Marton
Assignee: Elek, Marton

ServiceRuntimeInfo is a generic interface to provide common information via JMX beans (such as build version, compile info, started time). Currently it is used only by KSM/SCM; I suggest moving it to the hadoop-hdsl/framework project from hadoop-commons.
[jira] [Created] (HDFS-13340) Ozone: Fix false positive RAT warning when project built without hds/cblock
Elek, Marton created HDFS-13340:
---

Summary: Ozone: Fix false positive RAT warning when project built without hds/cblock
Key: HDFS-13340
URL: https://issues.apache.org/jira/browse/HDFS-13340
Project: Hadoop HDFS
Issue Type: Sub-task
Components: HDFS-7240
Affects Versions: HDFS-7240
Reporter: Elek, Marton
Assignee: Elek, Marton
Attachments: HDFS-13340-HDFS-7240.001.patch

First of all: all the licence headers are handled well on this branch. Unfortunately, Maven doesn't know that. If the project is built *without* -P hdsl, the RAT exclude rules in the hdsl/cblock/ozone projects are not applied, since those projects are not used as Maven projects; they are handled as static files.

The solution is:

1. Instead of a proper exclude, I added the licence headers to some test files.
2. I added an additional exclude to the root pom.xml.
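For step 2, an exclude in the root pom.xml is typically declared through the apache-rat-plugin configuration; a minimal sketch with an illustrative path (the actual patch defines its own excludes):

```xml
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- illustrative path only; not taken from the actual patch -->
      <exclude>hadoop-hdsl/**/test-resource.txt</exclude>
    </excludes>
  </configuration>
</plugin>
```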
[jira] [Created] (HDFS-13339) TestDataNodeVolumeFailureReporting#testVolFailureStatsPreservedOnNNRestart blocks on waitReplication
liaoyuxiangqin created HDFS-13339:
---

Summary: TestDataNodeVolumeFailureReporting#testVolFailureStatsPreservedOnNNRestart blocks on waitReplication
Key: HDFS-13339
URL: https://issues.apache.org/jira/browse/HDFS-13339
Project: Hadoop HDFS
Issue Type: Bug
Components: datanode
Environment: os: Linux 2.6.32-358.el6.x86_64
hadoop version: hadoop-3.2.0-SNAPSHOT
unit: mvn test -Pnative -Dtest=TestDataNodeVolumeFailureReporting#testVolFailureStatsPreservedOnNNRestart
Reporter: liaoyuxiangqin

When I execute the unit test TestDataNodeVolumeFailureReporting#testVolFailureStatsPreservedOnNNRestart, the process blocks on waitReplication. Detailed information follows:

[INFO] ---
[INFO] T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 307.492 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
[ERROR] testVolFailureStatsPreservedOnNNRestart(org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting) Time elapsed: 307.206 s <<< ERROR!
java.util.concurrent.TimeoutException: Timed out waiting for /test1 to reach 2 replicas
    at org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:800)
    at org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testVolFailureStatsPreservedOnNNRestart(TestDataNodeVolumeFailureReporting.java:283)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
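The waitReplication call that times out here is essentially a poll-until-deadline loop. A self-contained sketch of that pattern (the WaitUtil class and its signature are illustrative, not the actual DFSTestUtil code):

```java
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

public class WaitUtil {

    // Polls `condition` every `intervalMs` milliseconds until it becomes
    // true, throwing TimeoutException once `timeoutMs` has elapsed --
    // the same shape as the "Timed out waiting for /test1 to reach
    // 2 replicas" failure above.
    public static void waitFor(BooleanSupplier condition, long intervalMs, long timeoutMs)
            throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new TimeoutException("Timed out after " + timeoutMs + " ms");
            }
            Thread.sleep(intervalMs);
        }
    }
}
```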