Re: Looking to a Hadoop 3 release
If classloader isolation is in place, then dependency versions can freely be upgraded as won't pollute apps space (things get trickier if there is an ON/OFF switch). On Thu, Mar 5, 2015 at 9:21 PM, Allen Wittenauer wrote: > > Is there going to be a general upgrade of dependencies? I'm thinking of > jetty & jackson in particular. > > On Mar 5, 2015, at 5:24 PM, Andrew Wang wrote: > > > I've taken the liberty of adding a Hadoop 3 section to the Roadmap wiki > > page. In addition to the two things I've been pushing, I also looked > > through Allen's list (thanks Allen for making this) and picked out the > > shell script rewrite and the removal of HFTP as big changes. This would > be > > the place to propose features for inclusion in 3.x, I'd particularly > > appreciate help on the YARN/MR side. > > > > Based on what I'm hearing, let me modulate my proposal to the following: > > > > - We avoid cutting branch-3, and release off of trunk. The trunk-only > > changes don't look that scary, so I think this is fine. This does mean we > > need to be more rigorous before merging branches to trunk. I think > > Vinod/Giri's work on getting test-patch.sh runs on non-trunk branches > would > > be very helpful in this regard. > > - We do not include anything to break wire compatibility unless (as Jason > > says) it's an unbelievably awesome feature. > > - No harm in rolling alphas from trunk, as it doesn't lock us to anything > > compatibility wise. Downstreams like releases. > > > > I'll take Steve's advice about not locking GA to a given date, but I also > > share his belief that we can alpha/beta/GA faster than it took for Hadoop > > 2. Let's roll some intermediate releases, work on the roadmap items, and > > see how we're feeling in a few months. > > > > Best, > > Andrew > > > > On Thu, Mar 5, 2015 at 3:21 PM, Siddharth Seth wrote: > > > >> I think it'll be useful to have a discussion about what else people > would > >> like to see in Hadoop 3.x - especially if the change is potentially > >> incompatible. Also, what we expect the release schedule to be for major > >> releases and what triggers them - JVM version, major features, the need > for > >> incompatible changes ? Assuming major versions will not be released > every 6 > >> months/1 year (adoption time, fairly disruptive for downstream projects, > >> and users) - considering additional features/incompatible changes for > 3.x > >> would be useful. > >> > >> Some features that come to mind immediately would be > >> 1) enhancements to the RPC mechanics - specifically support for AsynRPC > / > >> two way communication. There's a lot of places where we re-use > heartbeats > >> to send more information than what would be done if the PRC layer > supported > >> these features. Some of this can be done in a compatible manner to the > >> existing RPC sub-system. Others like 2 way communication probably > cannot. > >> After this, having HDFS/YARN actually make use of these changes. The > other > >> consideration is adoption of an alternate system ike gRpc which would be > >> incompatible. > >> 2) Simplification of configs - potentially separating client side > configs > >> and those used by daemons. This is another source of perpetual confusion > >> for users. > >> > >> Thanks > >> - Sid > >> > >> > >> On Thu, Mar 5, 2015 at 2:46 PM, Steve Loughran > >> wrote: > >> > >>> Sorry, outlook dequoted Alejandros's comments. 
> >>> > >>> Let me try again with his comments in italic and proofreading of mine > >>> > >>> On 05/03/2015 13:59, "Steve Loughran" >>> ste...@hortonworks.com>> wrote: > >>> > >>> > >>> > >>> On 05/03/2015 13:05, "Alejandro Abdelnur" >>> tuc...@gmail.com><mailto:tuc...@gmail.com>> wrote: > >>> > >>> IMO, if part of the community wants to take on the responsibility and > >> work > >>> that takes to do a new major release, we should not discourage them > from > >>> doing that. > >>> > >>> Having multiple major branches active is a standard practice. > >>> > >>> Looking @ 2.x, the major work (HDFS HA, YARN) meant that it did take a > >>> long time to get out, and during that time 0.21, 0.22, got
Re: Looking to a Hadoop 3 release
IMO, if part of the community wants to take on the responsibility and work that takes to do a new major release, we should not discourage them from doing that. Having multiple major branches active is a standard practice. This time around we are not replacing the guts as we did from Hadoop 1 to Hadoop 2, but superficial surgery to address issues were not considered (or was too much to take on top of the guts transplant). For the split brain concern, we did a great of job maintaining Hadoop 1 and Hadoop 2 until Hadoop 1 faded away. Based on that experience I would say that the coexistence of Hadoop 2 and Hadoop 3 will be much less demanding/traumatic. Also, to facilitate the coexistence we should limit Java language features to Java 7 (even if the runtime is Java 8), once Java 7 is not used anymore we can remove this limitation. Thanks. On Thu, Mar 5, 2015 at 11:40 AM, Vinod Kumar Vavilapalli < vino...@hortonworks.com> wrote: > The 'resistance' is not so much about a new major release, more so about > the content and the roadmap of the release. Other than the two specific > features raised (the need for breaking compat for them is something that I > am debating), I haven't seen a roadmap of branch-3 about any more features > that this community needs to discuss about. If all the difference between > branch-2 and branch-3 is going to be JDK + a couple of incompat changes, it > is a big problem in two dimensions (1) it's a burden keeping the branches > in sync and avoiding the split-brain we experienced with 1.x, 2.x or worse > branch-0.23, branch-2 and (2) very hard to ask people to not break more > things in branch-3. > > We seem to have agreed upon a course of action for JDK7. And now we are > taking a different direction for JDK8. Going by this new proposal, come > 2016, we will have to deal with JDK9 and 3 mainline incompatible hadoop > releases. > > Regarding, individual improvements like classpath isolation, shell script > stuff, Jason Lowe captured it perfectly on HADOOP-11656 - it should be > possible for every major feature that we develop to be a opt in, unless the > change is so great and users can balance out the incompatibilities for the > new stuff they are getting. Even with an ground breaking change like with > YARN, we spent a bit of time to ensure compatibility (MAPREDUCE-5108) that > has paid so many times over in return. Breaking compatibility shouldn't > come across as too cheap a thing. > > Thanks, > +Vinod > > On Mar 4, 2015, at 10:15 AM, Andrew Wang andrew.w...@cloudera.com>> wrote: > > Where does this resistance to a new major release stem from? As I've > described from the beginning, this will look basically like a 2.x release, > except for the inclusion of classpath isolation by default and target > version JDK8. I've expressed my desire to maintain API and wire > compatibility, and we can audit the set of incompatible changes in trunk to > ensure this. My proposal for doing alpha and beta releases leading up to GA > also gives downstreams a nice amount of time for testing and validation. > >
Re: Guava
IMO we should: 1* have a clean and thin client API JAR (which does not drag any 3rd party dependencies, or a well defined small set -i.e. slf4j & log4j-) 2* have a client implementation that uses a classloader to isolate client impl 3rd party deps from app dependencies. #2 can be done using a stock URLClassLoader (i would just subclass it to forbid packages in the API JAR and exposed 3rd parties to be loaded from the app JAR) #1 is the tricky thing as our current API modules don't have a clean API/impl separation. thx PS: If folks are interested in pursing this, I can put together a prototype of how #2 would work (I don't think it will be more than 200 lines of code) On Mon, Nov 10, 2014 at 5:18 AM, Steve Loughran wrote: > Yes, Guava is a constant pain; there's lots of open JIRAs related to it, as > its the one we can't seamlessly upgrade. Not unless we do our own fork and > reinsert the missing classes. > > The most common uses in the code are > > @VisibleForTesting (easily replicated) > and the Precondition.check() operations > > The latter is also easily swapped out, and we could even add the check they > forgot: > Preconditions.checkArgNotNull(argname, arg) > > > These are easy; its the more complex data structures that matter more. > > I think for Hadoop 2.7 & java 7 we need to look at this problem and do > something. Even if we continue to ship Guava 11 so that the HBase team > don't send any (more) death threats, we can/should rework Hadoop to build > and run against Guava 16+ too. That's needed to fix some of the recent java > 7/8+ changes. > > -Everything in v11 dropped from v16 MUST to be implemented with our own > versions. > -anything tagged as deprecated in 11+ SHOULD be replaced by newer stuff, > wherever possible. > > I think for 2.7+ we should add some new profiles to the POM, for Java 8 and > 9 alongside the new baseline java 7. For those later versions we could > perhaps mandate Guava 16. > > > > On 10 November 2014 00:42, Arun C Murthy wrote: > > > … has been a constant pain w.r.t compatibility etc. > > > > Should we consider adopting a policy to not use guava in > Common/HDFS/YARN? > > > > MR doesn't matter too much since it's application-side issue, it does > hurt > > end-users though since they still might want a newer guava-version, but > at > > least they can modify MR. > > > > Thoughts? > > > > thanks, > > Arun > > > > > > -- > > CONFIDENTIALITY NOTICE > > NOTICE: This message is intended for the use of the individual or entity > to > > which it is addressed and may contain information that is confidential, > > privileged and exempt from disclosure under applicable law. If the reader > > of this message is not the intended recipient, you are hereby notified > that > > any printing, copying, dissemination, distribution, disclosure or > > forwarding of this communication is strictly prohibited. If you have > > received this communication in error, please contact the sender > immediately > > and delete it from your system. Thank You. > > > > -- > CONFIDENTIALITY NOTICE > NOTICE: This message is intended for the use of the individual or entity to > which it is addressed and may contain information that is confidential, > privileged and exempt from disclosure under applicable law. If the reader > of this message is not the intended recipient, you are hereby notified that > any printing, copying, dissemination, distribution, disclosure or > forwarding of this communication is strictly prohibited. 
If you have > received this communication in error, please contact the sender immediately > and delete it from your system. Thank You. >
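To illustrate #2 above, here is a minimal, hypothetical sketch of the kind of filtering URLClassLoader Alejandro describes: the API packages and the few exposed 3rd parties always come from the parent (client) classloader, while everything else is loaded app-first so the client implementation's 3rd party dependencies stay isolated. The class name and package prefixes are illustrative only, not an actual Hadoop API.

    import java.net.URL;
    import java.net.URLClassLoader;

    // Hypothetical sketch of a child-first classloader with a parent-only package list.
    public class IsolatingClassLoader extends URLClassLoader {
      // e.g. { "org.apache.hadoop.", "org.slf4j.", "org.apache.log4j." }
      private final String[] parentOnlyPrefixes;

      public IsolatingClassLoader(URL[] appUrls, ClassLoader parent, String[] parentOnlyPrefixes) {
        super(appUrls, parent);
        this.parentOnlyPrefixes = parentOnlyPrefixes;
      }

      @Override
      protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        for (String prefix : parentOnlyPrefixes) {
          if (name.startsWith(prefix)) {
            // API classes and exposed 3rd parties are never taken from the app JARs.
            return super.loadClass(name, resolve);
          }
        }
        synchronized (getClassLoadingLock(name)) {
          Class<?> c = findLoadedClass(name);
          if (c == null) {
            try {
              c = findClass(name); // app-first for everything else
            } catch (ClassNotFoundException e) {
              c = super.loadClass(name, resolve); // fall back to the parent
            }
          }
          if (resolve) {
            resolveClass(c);
          }
          return c;
        }
      }
    }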
[jira] [Created] (MAPREDUCE-6101) on job submission, if input or output directories are encrypted, shuffle data should be encrypted at rest
Alejandro Abdelnur created MAPREDUCE-6101: - Summary: on job submission, if input or output directories are encrypted, shuffle data should be encrypted at rest Key: MAPREDUCE-6101 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6101 Project: Hadoop Map/Reduce Issue Type: Improvement Components: job submission, security Affects Versions: 2.6.0 Reporter: Alejandro Abdelnur Assignee: Arun Suresh Currently, shuffle data at-rest encryption has to be enabled explicitly in order to work. If it is not set explicitly (ON or OFF) but the input or output HDFS directories of the job are in an encryption zone, we should set it to ON. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
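A rough, illustrative sketch of the submission-time check this JIRA proposes: if the property is not set at all and any job input/output path falls inside an encryption zone, default it to ON. This assumes the HdfsAdmin.getEncryptionZoneForPath() API from the HDFS at-rest encryption work and the MRJobConfig.MR_ENCRYPTED_INTERMEDIATE_DATA key; the class and method names below are made up, not the actual patch.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.client.HdfsAdmin;
    import org.apache.hadoop.mapreduce.MRJobConfig;

    // Illustrative only: an explicit ON/OFF always wins; otherwise turn encryption ON
    // when any of the job's input/output paths is inside an HDFS encryption zone.
    public class ShuffleEncryptionDefaulter {
      static void maybeEnableEncryptedIntermediateData(Configuration conf, Path... jobPaths)
          throws Exception {
        if (conf.get(MRJobConfig.MR_ENCRYPTED_INTERMEDIATE_DATA) != null) {
          return; // user set it explicitly, do not override
        }
        HdfsAdmin admin = new HdfsAdmin(URI.create(conf.get("fs.defaultFS")), conf);
        for (Path path : jobPaths) {
          if (admin.getEncryptionZoneForPath(path) != null) {
            conf.setBoolean(MRJobConfig.MR_ENCRYPTED_INTERMEDIATE_DATA, true);
            return;
          }
        }
      }
    }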
Oops! Pushed 2 commits to trunk by mistake; just pushed their revert
-- Alejandro
Re: [VOTE] Release Apache Hadoop 2.5.1 RC0
Thanks Karthik. +1. + verified MD5 for source tarball + verified signature for source tarball + successfully run apache-rat:check + checked CHANGES, LICENSE, README, NOTICE files. + built from source tarball + started pseudo cluster + run a couple of MR example jobs + basic test on HttpFS On Wed, Sep 10, 2014 at 10:10 AM, Karthik Kambatla wrote: > Thanks for reporting the mistake in the documentation, Akira. While it is > good to fix it, I am not sure it is big enough to warrant another RC, > particularly because 2.5.1 is very much 2.5.0 done right. > > I just updated the how-to-release wiki to capture this step in the release > process, so we don't miss it in the future. > > On Mon, Sep 8, 2014 at 11:37 PM, Akira AJISAKA > > wrote: > > > -0 (non-binding) > > > > In the document, "Apache Hadoop 2.5.1 is a minor release in the 2.x.y > > release line, buliding upon the previous stable release 2.4.1." > > > > Hadoop 2.5.1 is a point release. Filed HADOOP-11078 to track this. > > > > Regards, > > Akira > > > > > > (2014/09/09 0:51), Karthik Kambatla wrote: > > > >> +1 (non-binding) > >> > >> Built the source tarball, brought up a pseudo-distributed cluster and > ran > >> a > >> few MR jobs. Verified documentation and size of the binary tarball. > >> > >> On Fri, Sep 5, 2014 at 5:18 PM, Karthik Kambatla > >> wrote: > >> > >> Hi folks, > >>> > >>> I have put together a release candidate (RC0) for Hadoop 2.5.1. > >>> > >>> The RC is available at: http://people.apache.org/~ > >>> kasha/hadoop-2.5.1-RC0/ > >>> The RC git tag is release-2.5.1-RC0 > >>> The maven artifacts are staged at: > >>> > https://repository.apache.org/content/repositories/orgapachehadoop-1010/ > >>> > >>> You can find my public key at: > >>> http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS > >>> > >>> Please try the release and vote. The vote will run for the now usual 5 > >>> days. > >>> > >>> Thanks > >>> Karthik > >>> > >>> > >> > > > -- Alejandro
[jira] [Created] (MAPREDUCE-6060) shuffle data should be encrypted at rest if the input/output of the job are in an encryption zone
Alejandro Abdelnur created MAPREDUCE-6060: - Summary: shuffle data should be encrypted at rest if the input/output of the job are in an encryption zone Key: MAPREDUCE-6060 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6060 Project: Hadoop Map/Reduce Issue Type: Improvement Components: security Affects Versions: 2.6.0 Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur If the input or output of an MR job are within an encryption zone, by default the intermediate data of the job should be encrypted. Setting the {{MRJobConfig.MR_ENCRYPTED_INTERMEDIATE_DATA}} property explicitly should override the default behavior. -- This message was sent by Atlassian JIRA (v6.2#6252)
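For context, the explicit override mentioned above is just the usual per-job configuration setting; a minimal, hypothetical example (the job name is made up):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.MRJobConfig;

    // Setting the property explicitly (true or false) overrides the
    // encryption-zone based default proposed in this JIRA.
    public class ExplicitIntermediateEncryption {
      public static Job newJob(Configuration conf) throws Exception {
        conf.setBoolean(MRJobConfig.MR_ENCRYPTED_INTERMEDIATE_DATA, true);
        return Job.getInstance(conf, "job-with-encrypted-intermediate-data");
      }
    }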
Re: Updates on migration to git
I've just did some work on top of trunk and branch-2, all good. Thanks Karthik. On Tue, Aug 26, 2014 at 2:26 PM, Karthik Kambatla wrote: > I compared the new asf git repo against the svn and github repos (mirrored > from svn). Here is what I see: > - for i in *; do git diff $i ../hadoop-github/$i; done showed no > differences between the two. So, I think all the source is there. > - The branches match > - All svn tags exist in git, but git has a few more. These additional ones > are those that we "deleted" from svn. > - git rev-list --remotes | wc -l shows 27006 revisions in the new git repo > and 29549 revisions in the github repo. Checking with Daniel, he said the > git svn import works differently compared to the git mirroring. > > Are we comfortable with making the git repo writable under these > conditions? I ll let other people poke around and report. > > Thanks for your cooperation, > Karthik > > > On Tue, Aug 26, 2014 at 1:19 PM, Karthik Kambatla > wrote: > > > The git repository is now ready for inspection. I ll take a look shortly, > > but it would be great if a few others could too. > > > > Once we are okay with it, we can ask it to be writable. > > > > > > On Tuesday, August 26, 2014, Karthik Kambatla > wrote: > > > >> Hi Suresh > >> > >> There was one vote thread on whether to migrate to git, and the > >> implications to the commit process for individual patches and feature > >> branches - > >> https://www.mail-archive.com/common-dev@hadoop.apache.org/msg13447.html > >> . Prior to that, there was a discuss thread on the same topic. > >> > >> As INFRA handles the actual migration from subversion to git, the vote > >> didn't include those specifics. The migration is going on as we speak > (See > >> INFRA-8195). The initial expectation was that the migration would be > done > >> in a few hours, but it has been several hours and the last I heard the > >> import was still running. > >> > >> I have elaborated on the points in the vote thread and drafted up a wiki > >> page on how-to-commit - > https://wiki.apache.org/hadoop/HowToCommitWithGit > >> . We can work on improving this further and call a vote thread on those > >> items if need be. > >> > >> Thanks > >> Karthik > >> > >> > >> On Tue, Aug 26, 2014 at 11:41 AM, Suresh Srinivas < > sur...@hortonworks.com > >> > wrote: > >> > >>> Karthik, > >>> > >>> I would like to see detailed information on how this migration will be > >>> done, how it will affect the existing project and commit process. This > >>> should be done in a document that can be reviewed instead of in an > email > >>> thread on an ad-hoc basis. Was there any voting on this in PMC and > should > >>> we have a vote to ensure everyone is one the same page on doing this > and > >>> how to go about it? > >>> > >>> Regards, > >>> Suresh > >>> > >>> > >>> On Tue, Aug 26, 2014 at 9:17 AM, Karthik Kambatla > >>> wrote: > >>> > >>> > Last I heard, the import is still going on and appears closer to > >>> getting > >>> > done. Thanks for your patience with the migration. > >>> > > >>> > I ll update you as and when there is something. Eventually, the git > >>> repo > >>> > should be at the location in the wiki. > >>> > > >>> > > >>> > On Mon, Aug 25, 2014 at 3:45 PM, Karthik Kambatla < > ka...@cloudera.com> > >>> > wrote: > >>> > > >>> > > Thanks for bringing these points up, Zhijie. > >>> > > > >>> > > By the way, a revised How-to-commit wiki is at: > >>> > > https://wiki.apache.org/hadoop/HowToCommitWithGit . Please feel > >>> free to > >>> > > make changes and improve it. 
> >>> > > > >>> > > On Mon, Aug 25, 2014 at 11:00 AM, Zhijie Shen < > zs...@hortonworks.com > >>> > > >>> > > wrote: > >>> > > > >>> > >> Do we have any convention about "user.name" and "user.email"? For > >>> > >> example, > >>> > >> we'd like to use @apache.org for the email. > >>> > >> > >>> > > > >>> > > May be, we can ask people to use project-specific configs here and > >>> use > >>> > > their real name and @apache.org address. > >>> > > > >>> > > Is there any downside to letting people use their global values for > >>> these > >>> > > configs? > >>> > > > >>> > > > >>> > > > >>> > >> > >>> > >> Moreover, do we want to use "--author="Author Name < > >>> em...@address.com>" > >>> > >> when committing on behalf of a particular contributor? > >>> > >> > >>> > > > >>> > > Fetching the email-address is complicated here. Should we use the > >>> > > contributor's email from JIRA? What if that is not their @apache > >>> address? > >>> > > > >>> > > > >>> > >> > >>> > >> > >>> > >> On Mon, Aug 25, 2014 at 9:56 AM, Karthik Kambatla < > >>> ka...@cloudera.com> > >>> > >> wrote: > >>> > >> > >>> > >> > Thanks for your input, Steve. Sorry for sending the email out > that > >>> > >> late, I > >>> > >> > sent it as soon as I could. > >>> > >> > > >>> > >> > > >>> > >> > On Mon, Aug 25, 2014 at 2:20 AM, Steve Loughran < > >>> > ste...@hortonworks.com > >>> > >> > > >>> > >> > wrote: > >>> > >> > > >>> > >> > > just caught up
Re: Apache Hadoop 2.5.0 published tarballs are missing some txt files
Verified MD5s and signatures of both SRC & BIN tarballs. Thanks Karthik. On Mon, Aug 18, 2014 at 12:42 PM, Karthik Kambatla wrote: > Hi devs > > Tsuyoshi just brought it to my notice that the published tarballs don't > have LICENSE, NOTICE and README at the top-level. Instead, they are only > under common, hdfs, etc. > > Now that we have already announced the release and the jars/functionality > doesn't change, I propose we just update the tarballs with ones that > includes those files? I just untar-ed the published tarballs and copied > LICENSE, NOTICE and README from under common to the top directory and > tar-ed them back again. > > The updated tarballs are at: http://people.apache.org/~kasha/hadoop-2.5.0/ > . Can someone please verify the signatures? > > If you would prefer an alternate action, please suggest. > > Thanks > Karthik > > PS: HADOOP-10956 should include the fix for these files also. >
Re: [VOTE] Migration from subversion to git for version control
+1 Alejandro (phone typing) > On Aug 8, 2014, at 19:57, Karthik Kambatla wrote: > > I have put together this proposal based on recent discussion on this topic. > > Please vote on the proposal. The vote runs for 7 days. > > 1. Migrate from subversion to git for version control. > 2. Force-push to be disabled on trunk and branch-* branches. Applying > changes from any of trunk/branch-* to any of branch-* should be through > "git cherry-pick -x". > 3. Force-push on feature-branches is allowed. Before pulling in a > feature, the feature-branch should be rebased on latest trunk and the > changes applied to trunk through "git rebase --onto" or "git cherry-pick > ". > 4. Every time a feature branch is rebased on trunk, a tag that > identifies the state before the rebase needs to be created (e.g. > tag_feature_JIRA-2454_2014-08-07_rebase). These tags can be deleted once > the feature is pulled into trunk and the tags are no longer useful. > 5. The relevance/use of tags stay the same after the migration. > > Thanks > Karthik > > PS: Per Andrew Wang, this should be a "Adoption of New Codebase" kind of > vote and will be Lazy 2/3 majority of PMC members.
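For readers unfamiliar with the mechanics in points 3 and 4 of the proposal, a sketch of the command sequence for a feature branch, with made-up branch and tag names (a sketch only, not a prescribed procedure):

    # 4. tag the feature branch before rebasing so the pre-rebase state stays recoverable
    git checkout feature-JIRA-2454
    git tag tag_feature_JIRA-2454_2014-08-07_rebase

    # 3. rebase the feature branch on the latest trunk
    git fetch origin
    git rebase origin/trunk

    # apply the rebased commits to trunk; -x records the original commit id in the message
    git checkout trunk
    git cherry-pick -x <first-commit>^..<last-commit>

    # once the feature is in trunk, the rebase tag is no longer useful and can be deleted
    git tag -d tag_feature_JIRA-2454_2014-08-07_rebase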
Re: [DISCUSS] Migrate from svn to git for source control?
funny, i'd treat it as a merge vote. On Fri, Aug 8, 2014 at 11:44 AM, Karthik Kambatla wrote: > Thanks Steve. Including that in the proposal. > > By the way, from our project bylaws (http://hadoop.apache.org/bylaws.html > ), > I can't tell what kind of a vote this would be. > > > On Thu, Aug 7, 2014 at 1:22 AM, Steve Loughran > wrote: > > > On 6 August 2014 22:16, Karthik Kambatla wrote: > > > > > 3. Force-push on feature-branches is allowed. Before pulling in a > > feature, > > > the feature-branch should be rebased on latest trunk and the changes > > > applied to trunk through "git rebase --onto" or "git cherry-pick > > > ". > > > > > > > I'd add to this process the requirement to tag any feature branch before > a > > rebase, with some standard naming like > > > > tag_feature_JIRA-2454_2014-08-07_rebase > > > > Why? it keeps the state of the branch before the rebase in case you ever > > want it back again. Without the tag: lost data. Once the feature is > merged > > in you can rm the tags, but until then they give you a log of what > changes > > went on, and make it possible to switch back to the pre-rebase version. > > > > Without those tags you do lose history of the development. > > > > -- > > CONFIDENTIALITY NOTICE > > NOTICE: This message is intended for the use of the individual or entity > to > > which it is addressed and may contain information that is confidential, > > privileged and exempt from disclosure under applicable law. If the reader > > of this message is not the intended recipient, you are hereby notified > that > > any printing, copying, dissemination, distribution, disclosure or > > forwarding of this communication is strictly prohibited. If you have > > received this communication in error, please contact the sender > immediately > > and delete it from your system. Thank You. > > > -- Alejandro
Re: [VOTE] Release Apache Hadoop 2.5.0 RC2
+1 + verified MD5 for source tarball + verified signature for source tarball + successfully run apache-rat:check + checked CHANGES.txt files + built from source tarball + started pseudo cluster + run a couple of MR example jobs + basic test on HttpFS On Wed, Aug 6, 2014 at 2:45 PM, Ted Yu wrote: > +1 (non-binding) > > I used RC2 to run Apache Slider unit tests - all of which passed. > > Cheers > > > On Wed, Aug 6, 2014 at 2:17 PM, Karthik Kambatla > wrote: > > > +1 (non-binding) > > > > Brought up a pseudo distributed cluster. Ran a few HDFS operations and a > > couple of example MR jobs. Checked metrics being written out through > > FileSink. > > > > > > On Wed, Aug 6, 2014 at 1:59 PM, Karthik Kambatla > > wrote: > > > > > Hi folks, > > > > > > I have put together a release candidate (rc2) for Hadoop 2.5.0. > > > > > > The RC is available at: > > http://people.apache.org/~kasha/hadoop-2.5.0-RC2/ > > > The RC tag in svn is here: > > > https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.5.0-rc2/ > > > The maven artifacts are staged at: > > > > https://repository.apache.org/content/repositories/orgapachehadoop-1009/ > > > > > > You can find my public key at: > > > http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS > > > > > > Please try the release and vote. The vote will run for the now usual 5 > > > days. > > > > > > Thanks > > > > > > -- Alejandro
Re: [VOTE] Release Apache Hadoop 2.5.0 RC1
+0, (see 2 '-' below). I think we should address those and cut a new RC. + verified MD5 for source tarball + verified signature for source tarball + successfully run apache-rat:check - CHANGES.txt have 'Release 2.5.0 - UNRELEASED' they should have 'Release 2.5.0 - ' - HDFS CHANGES.txt has HDFS-6752 entry outside of the 2.5.0 section + built from source tarball + started pseudo cluster + run a couple of MR example jobs + basic test on HttpFS thx On Wed, Aug 6, 2014 at 3:49 AM, Steve Loughran wrote: > +1 binding > > slider validation > > purge all 2.5.0 artifacts in the local mvn repo (fish shell): > > rm -rf ~/.m2/repository/org/apache/hadoop/**/*2.5.0* > > clean slider build -verified download of artifacts from staging repo > > run all the tests, especially the one that we'd turned off for 2.4.0: > TestKilledHBaseAM (SLIDER-34, YARN-2065) > > all happy. > > > Object store tests > > 1. check out branch-2.5 @commit # 0c0c513 > 2. put in hadoop-common/src/test/resources/contract-test-options.xml with > binding to s3/s3n buckets > 3. run the s3n Contract tests: > mvn test -Dtest=TestS3NContract\* > ...all passed > > 4. in hadoop-tools/hadoop-openstack -ran all the tests with auth-keys.xml > set to bind all non-contract tests to a public cloud & > contract-test-options.xml for the contract tests. All passed > > > > > On 6 August 2014 01:37, Karthik Kambatla wrote: > > > Hi folks, > > > > I have put together a release candidate (rc1) for Hadoop 2.5.0. > > > > The RC is available at: > http://people.apache.org/~kasha/hadoop-2.5.0-RC1/ > > The RC tag in svn is here: > > https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.5.0-rc1/ > > The maven artifacts are staged at: > > https://repository.apache.org/content/repositories/orgapachehadoop-1008/ > > > > You can find my public key at: > > http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS > > > > Please try the release and vote. The vote will run for the now usual 5 > > days. > > > > Thanks > > > > -- > CONFIDENTIALITY NOTICE > NOTICE: This message is intended for the use of the individual or entity to > which it is addressed and may contain information that is confidential, > privileged and exempt from disclosure under applicable law. If the reader > of this message is not the intended recipient, you are hereby notified that > any printing, copying, dissemination, distribution, disclosure or > forwarding of this communication is strictly prohibited. If you have > received this communication in error, please contact the sender immediately > and delete it from your system. Thank You. > -- Alejandro
Re: [DISCUSS] Migrate from svn to git for source control?
I would say we can first move to git and keep the very same workflow we have today, then we can evolve it. On Tue, Aug 5, 2014 at 6:46 PM, Arpit Agarwal wrote: > +1 to voting on specific workflow(s). > > > On Tue, Aug 5, 2014 at 5:49 PM, Karthik Kambatla > wrote: > > > If we are to start a vote thread, will people prefer a vote thread that > > includes potential workflows as well? > > > > > > On Tue, Aug 5, 2014 at 5:40 PM, Karthik Kambatla > > wrote: > > > > > Thanks for your opinions, everyone. Looks like most people are for the > > > change and no one is against it. Let me start a vote for this. > > > > > > > > > On Mon, Aug 4, 2014 at 4:44 PM, Tsuyoshi OZAWA < > ozawa.tsuyo...@gmail.com > > > > > > wrote: > > > > > >> Thank you for supplementation, Andrew. Yes, we should go step by step > > >> and let's discuss review workflows on a another thread. > > >> > > >> Thanks, > > >> - Tsuyoshi > > >> > > >> On Tue, Aug 5, 2014 at 8:23 AM, Andrew Wang > > > >> wrote: > > >> > I think we should take things one step at a time. Switching to git > > >> > definitely opens up the possibility for better review workflows, but > > we > > >> can > > >> > discuss that on a different thread. > > >> > > > >> > A few different people have also mentioned Gerrit, so that'd be in > the > > >> > running along with Github (and I guess ReviewBoard). > > >> > > > >> > Thanks, > > >> > Andrew > > >> > > > >> > > > >> > On Mon, Aug 4, 2014 at 4:17 PM, Tsuyoshi OZAWA < > > >> ozawa.tsuyo...@gmail.com> > > >> > wrote: > > >> > > > >> >> Thank you for great suggestion, Karthik. +1(non-binding) to use > git. > > >> >> I'm also using private git repository. > > >> >> Additionally, I have one question. Will we accept github-based > > >> >> development like Apache Spark? IHMO, it allow us to leverage Hadoop > > >> >> development, because the cost of sending pull request is very low > and > > >> >> its review board is great. One concern is that the development > > >> >> workflow can change and it can confuse us. What do you think? > > >> >> > > >> >> Thanks, > > >> >> - Tsuyoshi > > >> >> > > >> >> On Sat, Aug 2, 2014 at 8:43 AM, Karthik Kambatla < > ka...@cloudera.com > > > > > >> >> wrote: > > >> >> > Hi folks, > > >> >> > > > >> >> > From what I hear, a lot of devs use the git mirror for > > >> >> development/reviews > > >> >> > and use subversion primarily for checking code in. I was > wondering > > >> if it > > >> >> > would make more sense just to move to git. In addition to > > subjective > > >> >> liking > > >> >> > of git, I see the following advantages in our workflow: > > >> >> > > > >> >> >1. Feature branches - it becomes easier to work on them and > keep > > >> >> >rebasing against the latest trunk. > > >> >> >2. Cherry-picks between branches automatically ensures the > exact > > >> same > > >> >> >commit message and tracks the lineage as well. > > >> >> >3. When cutting new branches and/or updating maven versions > > etc., > > >> it > > >> >> >allows doing all the work locally before pushing it to the > main > > >> >> branch. > > >> >> >4. Opens us up to potentially using other code-review tools. > > >> (Gerrit?) > > >> >> >5. It is just more convenient. > > >> >> > > > >> >> > I am sure this was brought up before in different capacities. I > > >> believe > > >> >> the > > >> >> > support for git in ASF is healthy now and several downstream > > projects > > >> >> have > > >> >> > moved. Again, from what I hear, ASF INFRA folks make the > migration > > >> >> process > > >> >> > fairly easy. 
> > >> >> > > > >> >> > What do you all think? > > >> >> > > > >> >> > Thanks > > >> >> > Karthik > > >> >> > > >> >> > > >> >> > > >> >> -- > > >> >> - Tsuyoshi > > >> >> > > >> > > >> > > >> > > >> -- > > >> - Tsuyoshi > > >> > > > > > > > > > > -- > CONFIDENTIALITY NOTICE > NOTICE: This message is intended for the use of the individual or entity to > which it is addressed and may contain information that is confidential, > privileged and exempt from disclosure under applicable law. If the reader > of this message is not the intended recipient, you are hereby notified that > any printing, copying, dissemination, distribution, disclosure or > forwarding of this communication is strictly prohibited. If you have > received this communication in error, please contact the sender immediately > and delete it from your system. Thank You. >
Re: [DISCUSS] Migrate from svn to git for source control?
+1, we did it for Oozie a while back and was painless with minor issues in Jenkins jobs Rebasing feature branches on latest trunk may be tricky as that may require a force push and if I'm not mistaken force pushes are disabled in Apache GIT. thx On Fri, Aug 1, 2014 at 4:43 PM, Karthik Kambatla wrote: > Hi folks, > > From what I hear, a lot of devs use the git mirror for development/reviews > and use subversion primarily for checking code in. I was wondering if it > would make more sense just to move to git. In addition to subjective liking > of git, I see the following advantages in our workflow: > >1. Feature branches - it becomes easier to work on them and keep >rebasing against the latest trunk. >2. Cherry-picks between branches automatically ensures the exact same >commit message and tracks the lineage as well. >3. When cutting new branches and/or updating maven versions etc., it >allows doing all the work locally before pushing it to the main branch. >4. Opens us up to potentially using other code-review tools. (Gerrit?) >5. It is just more convenient. > > I am sure this was brought up before in different capacities. I believe the > support for git in ASF is healthy now and several downstream projects have > moved. Again, from what I hear, ASF INFRA folks make the migration process > fairly easy. > > What do you all think? > > Thanks > Karthik >
[jira] [Resolved] (MAPREDUCE-5890) Support for encrypting Intermediate data and spills in local filesystem
[ https://issues.apache.org/jira/browse/MAPREDUCE-5890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur resolved MAPREDUCE-5890. --- Resolution: Fixed Fix Version/s: fs-encryption Hadoop Flags: Reviewed I've just committed this JIRA to fs-encryption branch. [~chris.douglas], thanks for all the review cycles you spent on this. [~asuresh], thanks for persevering until done, nice job. > Support for encrypting Intermediate data and spills in local filesystem > --- > > Key: MAPREDUCE-5890 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5890 > Project: Hadoop Map/Reduce > Issue Type: New Feature > Components: security >Affects Versions: 2.4.0 > Reporter: Alejandro Abdelnur >Assignee: Arun Suresh > Labels: encryption > Fix For: fs-encryption > > Attachments: MAPREDUCE-5890.10.patch, MAPREDUCE-5890.11.patch, > MAPREDUCE-5890.12.patch, MAPREDUCE-5890.13.patch, MAPREDUCE-5890.14.patch, > MAPREDUCE-5890.15.patch, MAPREDUCE-5890.3.patch, MAPREDUCE-5890.4.patch, > MAPREDUCE-5890.5.patch, MAPREDUCE-5890.6.patch, MAPREDUCE-5890.7.patch, > MAPREDUCE-5890.8.patch, MAPREDUCE-5890.9.patch, > org.apache.hadoop.mapred.TestMRIntermediateDataEncryption-output.txt, > syslog.tar.gz > > > For some sensitive data, encryption while in flight (network) is not > sufficient, it is required that while at rest it should be encrypted. > HADOOP-10150 & HDFS-6134 bring encryption at rest for data in filesystem > using Hadoop FileSystem API. MapReduce intermediate data and spills should > also be encrypted while at rest. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: [VOTE] Release Apache Hadoop 2.4.1
> What's in branch-2.4.1 doesn't currently match what's in this RC, but there is a tag that matches, right? Else we need to fix that. On Fri, Jun 27, 2014 at 3:26 PM, Aaron T. Myers wrote: > That's fine by me. Like I said, assuming that rc1 does indeed include the > fix in HDFS-6527, and not the revert, then rc1 should be functionally > correct. What's in branch-2.4.1 doesn't currently match what's in this RC, > but if that doesn't bother anyone else then I won't lose any sleep over it. > > -- > Aaron T. Myers > Software Engineer, Cloudera > > > On Jun 27, 2014, at 3:04 PM, "Arun C. Murthy" > wrote: > > > > Aaron, > > > > Since the amend was just to the test, I'll keep this RC as-is. > > > > I'll also comment on jira. > > > > thanks, > > Arun > > > > > > > >> On Jun 27, 2014, at 2:40 PM, "Aaron T. Myers" wrote: > >> > >> I'm -0 on rc1. > >> > >> Note the latest discussion on HDFS-6527 which first resulted in that > patch > >> being reverted from branch-2.4.1 because it was believed it wasn't > >> necessary, and then some more discussion which indicates that in fact > the > >> patch for HDFS-6527 should be included in 2.4.1, but with a slightly > >> different test case. > >> > >> I believe that rc1 was actually created after the first backport of > >> HDFS-6527, but before the revert, so rc1 should be functionally correct, > >> but the test case is not quite correct in rc1, and I believe that rc1 > does > >> not currently reflect the actual tip of branch-2.4.1. I'm not going to > >> consider this a deal-breaker, but seems like we should probably clean > it up. > >> > >> To get this all sorted out properly, if we wanted to, I believe we > should > >> do another backport of HDFS-6527 to branch-2.4.1 including only the > amended > >> test case, and create a new RC from that point. > >> > >> Best, > >> Aaron > >> > >> -- > >> Aaron T. Myers > >> Software Engineer, Cloudera > >> > >> > >>> On Fri, Jun 20, 2014 at 11:51 PM, Arun C Murthy > wrote: > >>> > >>> Folks, > >>> > >>> I've created another release candidate (rc1) for hadoop-2.4.1 based on > the > >>> feedback that I would like to push out. > >>> > >>> The RC is available at: > >>> http://people.apache.org/~acmurthy/hadoop-2.4.1-rc1 > >>> The RC tag in svn is here: > >>> https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.4.1-rc1 > >>> > >>> The maven artifacts are available via repository.apache.org. > >>> > >>> Please try the release and vote; the vote will run for the usual 7 > days. > >>> > >>> thanks, > >>> Arun > >>> > >>> > >>> > >>> -- > >>> Arun C. Murthy > >>> Hortonworks Inc. > >>> http://hortonworks.com/hdp/ > >>> > >>> > >>> > >>> -- > >>> CONFIDENTIALITY NOTICE > >>> NOTICE: This message is intended for the use of the individual or > entity to > >>> which it is addressed and may contain information that is confidential, > >>> privileged and exempt from disclosure under applicable law. If the > reader > >>> of this message is not the intended recipient, you are hereby notified > that > >>> any printing, copying, dissemination, distribution, disclosure or > >>> forwarding of this communication is strictly prohibited. If you have > >>> received this communication in error, please contact the sender > immediately > >>> and delete it from your system. Thank You. > > > > -- > > CONFIDENTIALITY NOTICE > > NOTICE: This message is intended for the use of the individual or entity > to > > which it is addressed and may contain information that is confidential, > > privileged and exempt from disclosure under applicable law. 
If the reader > > of this message is not the intended recipient, you are hereby notified > that > > any printing, copying, dissemination, distribution, disclosure or > > forwarding of this communication is strictly prohibited. If you have > > received this communication in error, please contact the sender > immediately > > and delete it from your system. Thank You. > -- Alejandro
Re: Moving to JDK7, JDK8 and new major releases
Chris, Compiling with jdk7 and doing javac -target 1.6 is not sufficient, you are still using jdk7 libraries and you could use new APIs, thus breaking jdk6 both at compile and runtime. you need to compile with jdk6 to ensure you are not running into that scenario. that is why i was suggesting the nightly jdk6 build/test jenkins job. On Wed, Jun 25, 2014 at 2:04 PM, Chris Nauroth wrote: > I'm also +1 for getting us to JDK7 within the 2.x line after reading the > proposals and catching up on the discussion in this thread. > > Has anyone yet considered how to coordinate this change with downstream > projects? Would we request downstream projects to upgrade to JDK7 first > before we make the move? Would we switch to JDK7, but run javac -target > 1.6 to maintain compatibility for downstream projects during an interim > period? > > Chris Nauroth > Hortonworks > http://hortonworks.com/ > > > > On Wed, Jun 25, 2014 at 9:48 AM, Owen O'Malley wrote: > > > On Tue, Jun 24, 2014 at 4:44 PM, Alejandro Abdelnur > > wrote: > > > > > After reading this thread and thinking a bit about it, I think it > should > > be > > > OK such move up to JDK7 in Hadoop > > > > > > I agree with Alejandro. Changing minimum JDKs is not an incompatible > change > > and is fine in the 2 branch. (Although I think it is would *not* be > > appropriate for a patch release.) Of course we need to do it with > > forethought and testing, but moving off of JDK 6, which is EOL'ed is a > good > > thing. Moving to Java 8 as a minimum seems much too aggressive and I > would > > push back on that. > > > > I'm also think that we need to let the dust settle on the Hadoop 2 line > for > > a while before we talk about Hadoop 3. It seems that it has only been in > > the last 6 months that Hadoop 2 adoption has reached the main stream > users. > > Our user community needs time to digest the changes in Hadoop 2.x before > we > > fracture the community by starting to discuss Hadoop 3 releases. > > > > .. Owen > > > > -- > CONFIDENTIALITY NOTICE > NOTICE: This message is intended for the use of the individual or entity to > which it is addressed and may contain information that is confidential, > privileged and exempt from disclosure under applicable law. If the reader > of this message is not the intended recipient, you are hereby notified that > any printing, copying, dissemination, distribution, disclosure or > forwarding of this communication is strictly prohibited. If you have > received this communication in error, please contact the sender immediately > and delete it from your system. Thank You. > -- Alejandro
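A tiny, made-up example of the failure mode described here: the class below compiles cleanly with a JDK7 javac even with -source 1.6 -target 1.6, because javac still resolves classes against the JDK7 class library, but it throws NoClassDefFoundError on a JDK6 runtime since java.nio.file only exists from JDK7 onwards. Only compiling against the JDK6 libraries (or pointing -bootclasspath at them) catches this, hence the nightly JDK6 Jenkins build.

    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class Jdk7ApiLeak {
      public static void main(String[] args) throws Exception {
        // java.nio.file.* is a JDK7-only API; -target 1.6 alone does not flag its use.
        System.out.println(Files.size(Paths.get(args[0])));
      }
    }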
Re: Moving to JDK7, JDK8 and new major releases
After reading this thread and thinking a bit about it, I think such a move up to JDK7 in Hadoop 2 should be OK for the following reasons: * Existing Hadoop 2 releases and related projects are running on JDK7 in production. * Commercial vendors of Hadoop have already done a lot of work to ensure Hadoop on JDK7 works while keeping Hadoop on JDK6 working. * Different from many of the 3rd party libraries used by Hadoop, the JDK is much stricter on backwards compatibility. IMPORTANT: I take this as an exception and not as a carte blanche for 3rd party dependencies and for moving from JDK7 to JDK8 (though it could be OK for the latter if we end up in the same state of affairs). Even for Hadoop 2.5, I think we could do the move: * Create the Hadoop 2.5 release branch. * Have one nightly Jenkins job that builds the Hadoop 2.5 branch with JDK6 to ensure no JDK7 language/API features creep into Hadoop 2.5. Keep this for all Hadoop 2.5.x releases. * Sanity tests for the Hadoop 2.5.x releases should be done with JDK7. * Apply Steve’s patch to require JDK7 on trunk and branch-2. * Move all Apache Jenkins jobs to build/test using JDK7. * Starting from Hadoop 2.6 we support JDK7 language/API features. Effectively, what we are ensuring is that Hadoop 2.5.x builds and tests with JDK6 & JDK7 and that all tests towards the release are done with JDK7. Users can proactively upgrade to JDK7 before upgrading to Hadoop 2.5.x, or, if they upgrade to Hadoop 2.5.x and run into any issue because of JDK6 (which would be quite unlikely), they can reactively upgrade to JDK7. Thoughts? On Tue, Jun 24, 2014 at 4:22 PM, Andrew Wang wrote: > Hi all, > > On dependencies, we've bumped library versions when we think it's safe and > the APIs in the new version are compatible. Or, it's not leaked to the app > classpath (e.g the JUnit version bump). I think the JIRAs Arun mentioned > fall into one of those categories. Steve can do a better job explaining > this to me, but we haven't bumped things like Jetty or Guava because they > are on the classpath and are not compatible. There is this line in the > compat guidelines: > >- Existing MapReduce, YARN & HDFS applications and frameworks should >work unmodified within a major release i.e. Apache Hadoop ABI is > supported. > > Since Hadoop apps can and do depend on the Hadoop classpath, the classpath > is effectively part of our API. I'm sure there are user apps out there that > will break if we make incompatible changes to the classpath. I haven't read > up on the MR JIRA Arun mentioned, but there MR isn't the only YARN app out > there. > > Sticking to the theme of "work unmodified", let's think about the user > effort required to upgrade their JDK. This can be a very expensive task. It > might need approval up and down the org, meaning lots of certification, > testing, and signoff. Considering the amount of user effort involved here, > it really seems like dropping a JDK is something that should only happen in > a major release. Else, there's the potential for nasty surprises in a > supposedly "minor" release. > > That said, we are in an unhappy place right now regarding JDK6, and it's > true that almost everyone's moved off of JDK6 at this point. So, I'd be > okay with an intermediate 2.x release that drops JDK6 support (but no > incompatible changes to the classpath like Guava). This is basically free, > and we could start using JDK7 idioms like multi-catch and new NIO stuff in > Hadoop code (a minor draw I guess).
> > My higher-level goal though is to avoid going through this same pain again > when JDK7 goes EOL. I'd like to do a JDK8-based release before then for > this reason. This is why I suggested skipping an intermediate 2.x+JDK7 > release and leapfrogging to 3.0+JDK8. 10 months is really not that far in > the future, and it seems like a better place to focus our efforts. I was > also hoping it'd be realistic to fix our classpath leakage by then, since > then we'd have a nice, tight, future-proofed new major release. > > Thanks, > Andrew > > > > > On Tue, Jun 24, 2014 at 11:43 AM, Arun C Murthy > wrote: > > > Andrew, > > > > Thanks for starting this thread. I'll edit the wiki to provide more > > context around rolling-upgrades etc. which, as I pointed out in the > > original thread, are key IMHO. > > > > On Jun 24, 2014, at 11:17 AM, Andrew Wang > > wrote: > > > https://wiki.apache.org/hadoop/MovingToJdk7and8 > > > > > > I think based on our current compatibility guidelines, Proposal A is > the > > > most attractive. We're pretty hamstrung by the requirement to keep the > > > classpath the same, which would be solved by either OSGI or shading our > > > deps (but that's a different discussion). > > > > I don't see that anywhere in our current compatibility guidelines. > > > > As you can see from > > > http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Compatibility.html > > we do not have such a policy (pasted here for convenience): > > > > Java C
Re: Plans of moving towards JDK7 in trunk
On Fri, Jun 20, 2014 at 10:02 PM, Arun C Murthy wrote: > > Hadoop 3.x out the door later this year > > +1 that makes sense to me. Thanks for volunteering Steve - I'm glad to > share the pain… ;-) Hey Arun, you may have missed that Andrew volunteered to do this as well (the thread is long, so easy to miss). Cheers -- Alejandro
Re: [VOTE] Release Apache Hadoop 2.4.1
+1 verified checksum & signature on SRC TARBALL verified CHANGES.txt files run apache-rat:check on SRC build SRC installed pseudo cluster run successfully a few MR sample jobs verified HttpFS Thanks Arun On Mon, Jun 16, 2014 at 9:27 AM, Arun C Murthy wrote: > Folks, > > I've created a release candidate (rc0) for hadoop-2.4.1 (bug-fix release) > that I would like to push out. > > The RC is available at: > http://people.apache.org/~acmurthy/hadoop-2.4.1-rc0 > The RC tag in svn is here: > https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.4.1-rc0 > > The maven artifacts are available via repository.apache.org. > > Please try the release and vote; the vote will run for the usual 7 days. > > thanks, > Arun > > > > -- > Arun C. Murthy > Hortonworks Inc. > http://hortonworks.com/hdp/ > > > > -- > CONFIDENTIALITY NOTICE > NOTICE: This message is intended for the use of the individual or entity to > which it is addressed and may contain information that is confidential, > privileged and exempt from disclosure under applicable law. If the reader > of this message is not the intended recipient, you are hereby notified that > any printing, copying, dissemination, distribution, disclosure or > forwarding of this communication is strictly prohibited. If you have > received this communication in error, please contact the sender immediately > and delete it from your system. Thank You. > -- Alejandro
[jira] [Created] (MAPREDUCE-5890) Support for encrypting Intermediate data and spills in local filesystem
Alejandro Abdelnur created MAPREDUCE-5890: - Summary: Support for encrypting Intermediate data and spills in local filesystem Key: MAPREDUCE-5890 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5890 Project: Hadoop Map/Reduce Issue Type: New Feature Components: security Affects Versions: 2.4.0 Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur For some sensitive data, encryption while in flight (network) is not sufficient, it is required that while at rest it should be encrypted. HADOOP-10150 & HDFS-6134 bring encryption at rest for data in filesystem using Hadoop FileSystem API. MapReduce intermediate data and spills should also be encrypted while at rest. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (MAPREDUCE-4658) Move tools JARs into separate lib directories and have common bootstrap script.
[ https://issues.apache.org/jira/browse/MAPREDUCE-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur resolved MAPREDUCE-4658. --- Resolution: Won't Fix [doing self-clean-up of JIRAs] The scripts have changed significantly since this JIRA. > Move tools JARs into separate lib directories and have common bootstrap > script. > --- > > Key: MAPREDUCE-4658 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-4658 > Project: Hadoop Map/Reduce > Issue Type: Improvement >Affects Versions: 2.0.2-alpha > Reporter: Alejandro Abdelnur > Assignee: Alejandro Abdelnur > > This is a follow up of the discussion going on on MAPREDUCE-4644 > -- > Moving each tools JARs into separate lib/ dirs it is quite easy (modifying a > single assembly). What we should think is a common bootstrap script for that > so each tool does not have to duplicate (and get wrong) such script. I'll > open a JIRA for that. > -- -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (MAPREDUCE-2608) Mavenize mapreduce contribs
[ https://issues.apache.org/jira/browse/MAPREDUCE-2608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur resolved MAPREDUCE-2608. --- Resolution: Invalid [doing self-clean up of JIRAs] closing as invalid as this has been done in different jiras. > Mavenize mapreduce contribs > --- > > Key: MAPREDUCE-2608 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-2608 > Project: Hadoop Map/Reduce > Issue Type: Task > Reporter: Alejandro Abdelnur > Assignee: Alejandro Abdelnur > > Same as HADOOP-6671 for mapreduce contribs -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: [VOTE] Release Apache Hadoop 2.3.0
Trying to run the PI MapReduce example using RC0 the job is failing, looking at the NM logs I'm getting the following. I believe it may be something in my setup as many already test MR jobs with this RC successfully, but couldn't figure out yet. Running on OSX 10.9.1 using JDK7. Thanks. -- 2014-02-13 13:12:06,092 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Initializing user tucu 2014-02-13 13:12:06,184 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying from /tmp/hadoop-tucu/nm-local-dir/nmPrivate/container_1392325918406_0001_01_01.tokens to /tmp/hadoop-tucu/nm-local-dir/usercache/tucu/appcache/application_1392325918406_0001/container_1392325918406_0001_01_01.tokens 2014-02-13 13:12:06,184 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: CWD set to /tmp/hadoop-tucu/nm-local-dir/usercache/tucu/appcache/application_1392325918406_0001 = file:/tmp/hadoop-tucu/nm-local-dir/usercache/tucu/appcache/application_1392325918406_0001 2014-02-13 13:12:06,957 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: DEBUG: FAILED { hdfs://localhost:9000/tmp/hadoop-yarn/staging/tucu/.staging/job_1392325918406_0001/job.jar, 1392325925016, PATTERN, (?:classes/|lib/).* }, rename destination /tmp/hadoop-tucu/nm-local-dir/usercache/tucu/appcache/application_1392325918406_0001/filecache/10 already exists. 2014-02-13 13:12:06,959 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:9000/tmp/hadoop-yarn/staging/tucu/.staging/job_1392325918406_0001/job.jar transitioned from DOWNLOADING to FAILED -- On Thu, Feb 13, 2014 at 1:06 PM, Sandy Ryza wrote: > +1 (non-binding) > > Built from source and ran jobs on a pseudo-distributed cluster with the > Fair Scheduler > > > On Wed, Feb 12, 2014 at 7:56 PM, Xuan Gong wrote: > > > +1 (non-binding) > > > > downloaded the source tar ball, built, ran a number of MR jobs on a > > single-node cluster and checked the job history from job history server. > > > > > > On Wed, Feb 12, 2014 at 7:53 PM, Gera Shegalov > wrote: > > > > > +1 non-binding > > > > > > - checked out the rc tag and built from source > > > - deployed a pseudo-distributed cluster with > > > yarn.resourcemanager.recovery.enabled=true > > > - ran a sleep job with multiple map waves and a long reducer > > > -- SIGKILL'd AM at various points and verified AM restart > > > -- SIGKILL'd RM at various points and verified RM restart > > > - checked some ui issues we had fixed. > > > - verified the new restful plain text container log NM-WS > > > > > > Thanks, > > > > > > Gera > > > > > > > > > On Tue, Feb 11, 2014 at 6:49 AM, Arun C Murthy > > > wrote: > > > > > > > Folks, > > > > > > > > I've created a release candidate (rc0) for hadoop-2.3.0 that I would > > like > > > > to get released. > > > > > > > > The RC is available at: > > > > http://people.apache.org/~acmurthy/hadoop-2.3.0-rc0 > > > > The RC tag in svn is here: > > > > > https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.3.0-rc0 > > > > > > > > The maven artifacts are available via repository.apache.org. > > > > > > > > Please try the release and vote; the vote will run for the usual 7 > > days. > > > > > > > > thanks, > > > > Arun > > > > > > > > PS: Thanks to Andrew, Vinod & Alejandro for all their help in various > > > > release activities. 
> > > > -- > > > > CONFIDENTIALITY NOTICE > > > > NOTICE: This message is intended for the use of the individual or > > entity > > > to > > > > which it is addressed and may contain information that is > confidential, > > > > privileged and exempt from disclosure under applicable law. If the > > reader > > > > of this message is not the intended recipient, you are hereby > notified > > > that > > > > any printing, copying, dissemination, distribution, disclosure or > > > > forwarding of this communication is strictly prohibited. If you have > > > > received this communication in error, please contact the sender > > > immediately > > > > and delete it from your system. Thank You. > > > > > > > > > > > -- > > CONFIDENTIALITY NOTICE > > NOTICE: This message is intended for the use of the individual or entity > to > > which it is addressed and may contain information that is confidential, > > privileged and exempt from disclosure under applicable law. If the reader > > of this message is not the intended recipient, you are hereby notified > that > > any printing, copying, dissemination, distribution, disclosure or > > forwarding of this communication is strictly prohibited. If you have > > received this communication in error, please contact the sender > immediately > > and delete it from your system. Thank You. > > > -- Alejandro
Re: [VOTE] Release Apache Hadoop 2.3.0
Running a pseudo cluster out of the box (expanding the binary tar, or building from source) does not work: you have to go and set the MR framework to yarn, the default FS URI to hdfs://localhost:8020, and so on. While I don't see this as a showstopper (for the knowledgeable user), it will make many users fail miserably. Plus, running an example MR job out of the box uses the local runner. If the user does not pay attention to the output, they will think the job ran in the cluster. Should we do a new RC fixing this? Thanks. On Wed, Feb 12, 2014 at 5:10 PM, Zhijie Shen wrote: > +1 (non-binding) > > I download the source tar ball, built from it, ran a number of MR jobs on a > single-node cluster, checked the job history from job history server. > > > On Wed, Feb 12, 2014 at 2:47 PM, Jian He wrote: > > +1 (non-binding) > > > > Built from source. Ran a few MR sample jobs on a pseudo cluster. > > Everything works fine. > > > > Jian > > > > > > On Wed, Feb 12, 2014 at 2:32 PM, Aaron T. Myers > wrote: > > > +1 (binding) > > > > > > I downloaded the source tar ball, checked signatures, built from the > > > source, ran a few of the sample jobs on a pseudo cluster. Everything > was > > as > > > expected. > > > > > > -- > > > Aaron T. Myers > > > Software Engineer, Cloudera > > > > > > > > > On Tue, Feb 11, 2014 at 6:49 AM, Arun C Murthy > > > wrote: > > > > > > > Folks, > > > > > > > > I've created a release candidate (rc0) for hadoop-2.3.0 that I would > > like > > > > to get released. > > > > > > > > The RC is available at: > > > > http://people.apache.org/~acmurthy/hadoop-2.3.0-rc0 > > > > The RC tag in svn is here: > > > > > https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.3.0-rc0 > > > > > > > > The maven artifacts are available via repository.apache.org. > > > > > > > > Please try the release and vote; the vote will run for the usual 7 > > days. > > > > > > > > thanks, > > > > Arun > > > > > > > > PS: Thanks to Andrew, Vinod & Alejandro for all their help in various > > > > release activities. > > > > -- > > > > CONFIDENTIALITY NOTICE > > > > NOTICE: This message is intended for the use of the individual or > > entity > to > > > > which it is addressed and may contain information that is > confidential, > > > > privileged and exempt from disclosure under applicable law. If the > > reader > > > > of this message is not the intended recipient, you are hereby > notified > > > that > > > > any printing, copying, dissemination, distribution, disclosure or > > > > forwarding of this communication is strictly prohibited. If you have > > > > received this communication in error, please contact the sender > > > immediately > > > > and delete it from your system. Thank You. > > > > > > > > > > > -- > > CONFIDENTIALITY NOTICE > > NOTICE: This message is intended for the use of the individual or entity > to > > which it is addressed and may contain information that is confidential, > > privileged and exempt from disclosure under applicable law. If the reader > > of this message is not the intended recipient, you are hereby notified > that > > any printing, copying, dissemination, distribution, disclosure or > > forwarding of this communication is strictly prohibited. If you have > > received this communication in error, please contact the sender > immediately > > and delete it from your system. Thank You. > > > > > > -- > Zhijie Shen > Hortonworks Inc.
> http://hortonworks.com/ > > -- > CONFIDENTIALITY NOTICE > NOTICE: This message is intended for the use of the individual or entity to > which it is addressed and may contain information that is confidential, > privileged and exempt from disclosure under applicable law. If the reader > of this message is not the intended recipient, you are hereby notified that > any printing, copying, dissemination, distribution, disclosure or > forwarding of this communication is strictly prohibited. If you have > received this communication in error, please contact the sender immediately > and delete it from your system. Thank You. > -- Alejandro
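For reference, a minimal sketch of the manual configuration described above for pointing the out-of-the-box tarball at a pseudo cluster; the etc/hadoop paths assume the default layout of the binary tarball, and the port simply follows the hdfs://localhost:8020 URI mentioned in the message:

$ # set the default FS URI (file location under the unpacked tarball is an assumption)
$ cat > etc/hadoop/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>
EOF
$ # set the MR framework to yarn
$ cat > etc/hadoop/mapred-site.xml <<'EOF'
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
EOF

Without these settings the MR examples fall back to the local runner, which is the behavior being warned about above.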
Re: Re-swizzle 2.3
Sure, as Sandy said, let's keep it in branch-2 for now and if not resolved by the 2.4 timeframe we'll revert them there. thx Alejandro (phone typing) > On Feb 7, 2014, at 10:14, Steve Loughran wrote: > >> On 6 February 2014 17:07, Alejandro Abdelnur wrote: >> >> Thanks Robert, >> >> All, >> >> >> >> I'm inclined to revert them from branch-2 as well. > -1 to that; if there are issues we should be able to find and fix them soon > enough. Even if you aren't doing long-lived YARN services yet, even llama > benefits from this zero-container-loss on AM restart. > > We do have Hoya using this (introspection code because the protobuf > structures are hidden away), means that you can kill the AM and HBase & > Accumulo clusters stay up in their YARN containers, the restarted AM gets > that list of containers (and any pending events), rebuilds its data > structures and carries on as before. Sweet! > > https://github.com/hortonworks/hoya/blob/develop/hoya-core/src/main/java/org/apache/hoya/yarn/appmaster/HoyaAppMaster.java#L551
Re: Re-swizzle 2.3
Vinod, I have the patches to revert most of the JIRAs, the first batch, I'll send them off line to you. Thanks. On Thu, Feb 6, 2014 at 8:56 PM, Vinod Kumar Vavilapalli wrote: > > Thanks. please post your findings, Jian wrote this part of the code and > between him/me, we can take care of those issues. > > +1 for going ahead with the revert on branch-2.3. I'll go do that tomorrow > morning unless I hear otherwise from Jian. > > Thanks, > +Vinod > > > On Feb 6, 2014, at 8:28 PM, Alejandro Abdelnur wrote: > > > Hi Vinod, > > > > Nothing confidential, > > > > * With umanaged AMs I'm seeing the trace I've posted a couple of days ago > > in YARN-1577 ( > > > https://issues.apache.org/jira/browse/YARN-1577?focusedCommentId=13891853&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13891853 > > ). > > > > * Also, Robert has been digging in Oozie testcases failing/getting suck > > with several token renewer threads, this failures happened consistently > at > > different places around the same testcases (like some file descriptors > > leaking out), reverting YARN-1490 fixes the problem. The potential issue > > with this is that a long running client (oozie) my run into this > situation > > thus becoming unstable. > > > > *Robert,* mind posting to YARN-1490 the jvm thread dump at the time of > test > > hanging? > > > > After YARN-1493 & YARN-1490 we have a couple of JIRAs trying to fix > issues > > introduced by them, and we still didn't get them right. > > > > Because this, the improvements driven by YARN-1493 & YARN-1490 seem that > > require more work before being stable. > > > > IMO, being conservative, we should do 2.3 without them and roll them with > > 2.4. If we want to do regular releases we will have to make this kind of > > calls, else we will start dragging the releases. > > > > Sounds like a plan? > > > > Thanks. > > > > > > > > On Thu, Feb 6, 2014 at 6:27 PM, Vinod Kumar Vavilapalli > > wrote: > > > >> Hey > >> > >> I am not against removing them from 2.3 if that is helpful for progress. > >> But I want to understand what the issues are before we make that > decision. > >> > >> There is the issue with unmanaged AM that is clearly known and I was > >> thinking of coming to the past two days, but couldn't. What is this new > >> issue that we (confidently?) pinned down to YARN-1490? > >> > >> Thanks > >> +Vinod > >> > >> On Feb 6, 2014, at 5:07 PM, Alejandro Abdelnur > wrote: > >> > >>> Thanks Robert, > >>> > >>> All, > >>> > >>> So it seems that YARN-1493 and YARN-1490 are introducing serious > >>> regressions. > >>> > >>> I would propose to revert them and the follow up JIRAs from the 2.3 > >> branch > >>> and keep working on them on trunk/branch-2 until the are stable (I > would > >>> even prefer reverting them from branch-2 not to block a 2.4 if they are > >> not > >>> ready in time). > >>> > >>> As I've mentioned before, the list of JIRAs to revert were: > >>> > >>> YARN-1493 > >>> YARN-1490 > >>> YARN-1166 > >>> YARN-1041 > >>> YARN-1566 > >>> > >>> Plus 2 additional JIRAs committed since my email on this issue 2 days > >> ago: > >>> > >>> *YARN-1661 > >>> *YARN-1689 (not sure if this JIRA is related in functionality to the > >>> previous ones but it is creating conflicts). > >>> > >>> I think we should hold on continuing work on top of something that is > >>> broken until the broken stuff is fixed. > >>> > >>> Quoting Arun, "Committers - Henceforth, please use extreme caution > while > >>> committing to branch-2.3. Please commit *only* blockers to 2.3." 
> >>> > >>> YARN-1661 & YARN-1689 are not blockers. > >>> > >>> Unless there are objections, I'll revert all these JIRAs from > branch-2.3 > >>> tomorrow around noon and I'll update fixedVersion in the JIRAs. > >>> > >>> I'm inclined to revert them from branch-2 as well. > >>> > >>> Thoughts? > >>> > >>> Thanks. > >>> > >>> > >>> On Thu, Feb 6, 2014 at 3:54 PM, Robert K
Re: Re-swizzle 2.3
Hi Vinod, Nothing confidential. * With unmanaged AMs I'm seeing the trace I posted a couple of days ago in YARN-1577 ( https://issues.apache.org/jira/browse/YARN-1577?focusedCommentId=13891853&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13891853 ). * Also, Robert has been digging into Oozie testcases failing/getting stuck with several token renewer threads; these failures happened consistently at different places around the same testcases (like some file descriptors leaking out), and reverting YARN-1490 fixes the problem. The potential issue with this is that a long running client (Oozie) may run into this situation, thus becoming unstable. *Robert,* mind posting to YARN-1490 the JVM thread dump at the time of the test hanging? After YARN-1493 & YARN-1490 we have a couple of JIRAs trying to fix issues introduced by them, and we still didn't get them right. Because of this, the improvements driven by YARN-1493 & YARN-1490 seem to require more work before being stable. IMO, being conservative, we should do 2.3 without them and roll them with 2.4. If we want to do regular releases we will have to make these kinds of calls, else we will start dragging the releases. Sounds like a plan? Thanks. On Thu, Feb 6, 2014 at 6:27 PM, Vinod Kumar Vavilapalli wrote: > Hey > > I am not against removing them from 2.3 if that is helpful for progress. > But I want to understand what the issues are before we make that decision. > > There is the issue with unmanaged AM that is clearly known and I was > thinking of coming to the past two days, but couldn't. What is this new > issue that we (confidently?) pinned down to YARN-1490? > > Thanks > +Vinod > > On Feb 6, 2014, at 5:07 PM, Alejandro Abdelnur wrote: > > > Thanks Robert, > > > > All, > > > > So it seems that YARN-1493 and YARN-1490 are introducing serious > > regressions. > > > > I would propose to revert them and the follow up JIRAs from the 2.3 > branch > > and keep working on them on trunk/branch-2 until the are stable (I would > > even prefer reverting them from branch-2 not to block a 2.4 if they are > not > > ready in time). > > > > As I've mentioned before, the list of JIRAs to revert were: > > > > YARN-1493 > > YARN-1490 > > YARN-1166 > > YARN-1041 > > YARN-1566 > > > > Plus 2 additional JIRAs committed since my email on this issue 2 days > ago: > > > > *YARN-1661 > > *YARN-1689 (not sure if this JIRA is related in functionality to the > > previous ones but it is creating conflicts). > > > > I think we should hold on continuing work on top of something that is > > broken until the broken stuff is fixed. > > > > Quoting Arun, "Committers - Henceforth, please use extreme caution while > > committing to branch-2.3. Please commit *only* blockers to 2.3." > > > > YARN-1661 & YARN-1689 are not blockers. > > > > Unless there are objections, I'll revert all these JIRAs from branch-2.3 > > tomorrow around noon and I'll update fixedVersion in the JIRAs. > > > > I'm inclined to revert them from branch-2 as well. > > > > Thoughts? > > > > Thanks. > > > > > > On Thu, Feb 6, 2014 at 3:54 PM, Robert Kanter > wrote: > > > >> I think we should revert YARN-1490 from Hadoop 2.3 branch. I think it > was > >> causing some strange behavior in the Oozie unit tests: > >> > >> Basically, we use a single MiniMRCluster and MiniDFSCluster across all > unit > >> tests in a module. 
With YARN-1490 we saw that, regardless of test > order, > >> the last few tests would timeout waiting for an MR job to finish; on > slower > >> machines, the entire test suite would timeout. Through some digging, I > >> found that we were getting a ton of "Connection refused" Exceptions on > >> LeaseRenewer talking to the NN and a few on the AM talking to the RM. > >> > >> After a bunch of investigation, I found that the problem went away once > >> YARN-1490 was removed. Though I couldn't figure out the exact problem. > >> Even though this occurred in unit tests, it does make me concerned that > it > >> could indicate some bigger issue in a long-running real cluster (where > >> everything isn't running on the same machine) that we haven't seen yet. > >> > >> > >> > >> On Thu, Feb 6, 2014 at 3:06 PM, Karthik Kambatla > >> wrote: > >> > >>> I have marked MAPREDUCE-5744 a blocker for 2.3. Committing it shortly. > >> Will > >>> pull it
Re: Re-swizzle 2.3
yep, the idea is to pull all of them out from branch2.3. things go back to normal then. thanks Alejandro (phone typing) > On Feb 6, 2014, at 17:39, Zhijie Shen wrote: > > Recently I brought 4 JIRAs to branch-2.3, which are MAPREDUCE-5743, YARN-1628, > YARN-1661 and YARN-1689. Recall that we mark test failure fixes as blockers > for pior releases as closing to release, thus I brought to branch-2.3 > MAPREDUCE-5743 > and YARN-1628 that are the fixes for the test failure on 2.3.0, but didn't > marked them as blockers. Please let me know if I should do that. > > YARN-1661 is a fix for exit log of DS AppMaster, otherwise the exit log of > it will always be failure, which sounds a critical issue to me. Feel free > to pull it out if any objects. > > YARN-1689 is brought to branch-2.3 as YARN-1493 is still in this branch. It > fixes one bug caused by YARN-1493. Those should be included or excluded > together upon the decision. > > Thanks, > Zhijie > > >> On Thu, Feb 6, 2014 at 5:15 PM, Sandy Ryza wrote: >> >> +1 to reverting those JIRAs from branch-2.3. As YARN-1689 is fixing a >> problem caused by YARN-1493 I think we can revert it in branch-2.3 as well. >> >> I think we should leave them in branch-2 for now. We can revert if 2.4 is >> imminent and they're holding it up, but hopefully the issues they caused >> will be fixed by then. >> >> -Sandy >> >> >> On Thu, Feb 6, 2014 at 5:07 PM, Alejandro Abdelnur >> wrote: >> >>> Thanks Robert, >>> >>> All, >>> >>> So it seems that YARN-1493 and YARN-1490 are introducing serious >>> regressions. >>> >>> I would propose to revert them and the follow up JIRAs from the 2.3 >> branch >>> and keep working on them on trunk/branch-2 until the are stable (I would >>> even prefer reverting them from branch-2 not to block a 2.4 if they are >> not >>> ready in time). >>> >>> As I've mentioned before, the list of JIRAs to revert were: >>> >>> YARN-1493 >>> YARN-1490 >>> YARN-1166 >>> YARN-1041 >>> YARN-1566 >>> >>> Plus 2 additional JIRAs committed since my email on this issue 2 days >> ago: >>> >>> *YARN-1661 >>> *YARN-1689 (not sure if this JIRA is related in functionality to the >>> previous ones but it is creating conflicts). >>> >>> I think we should hold on continuing work on top of something that is >>> broken until the broken stuff is fixed. >>> >>> Quoting Arun, "Committers - Henceforth, please use extreme caution while >>> committing to branch-2.3. Please commit *only* blockers to 2.3." >>> >>> YARN-1661 & YARN-1689 are not blockers. >>> >>> Unless there are objections, I'll revert all these JIRAs from branch-2.3 >>> tomorrow around noon and I'll update fixedVersion in the JIRAs. >>> >>> I'm inclined to revert them from branch-2 as well. >>> >>> Thoughts? >>> >>> Thanks. >>> >>> >>> On Thu, Feb 6, 2014 at 3:54 PM, Robert Kanter >>> wrote: >>> >>>> I think we should revert YARN-1490 from Hadoop 2.3 branch. I think it >>> was >>>> causing some strange behavior in the Oozie unit tests: >>>> >>>> Basically, we use a single MiniMRCluster and MiniDFSCluster across all >>> unit >>>> tests in a module. With YARN-1490 we saw that, regardless of test >> order, >>>> the last few tests would timeout waiting for an MR job to finish; on >>> slower >>>> machines, the entire test suite would timeout. Through some digging, I >>>> found that we were getting a ton of "Connection refused" Exceptions on >>>> LeaseRenewer talking to the NN and a few on the AM talking to the RM. 
>>>> >>>> After a bunch of investigation, I found that the problem went away once >>>> YARN-1490 was removed. Though I couldn't figure out the exact problem. >>>> Even though this occurred in unit tests, it does make me concerned >> that >>> it >>>> could indicate some bigger issue in a long-running real cluster (where >>>> everything isn't running on the same machine) that we haven't seen yet. >>>> >>>> >>>> >>>> On Thu, Feb 6, 2014 at 3:06 PM, Karthik Kambatla >>>> wrote: >>>> >>>>> I have marked MAPRE
Re: Re-swizzle 2.3
Thanks Robert, All, So it seems that YARN-1493 and YARN-1490 are introducing serious regressions. I would propose to revert them and the follow up JIRAs from the 2.3 branch and keep working on them on trunk/branch-2 until the are stable (I would even prefer reverting them from branch-2 not to block a 2.4 if they are not ready in time). As I've mentioned before, the list of JIRAs to revert were: YARN-1493 YARN-1490 YARN-1166 YARN-1041 YARN-1566 Plus 2 additional JIRAs committed since my email on this issue 2 days ago: *YARN-1661 *YARN-1689 (not sure if this JIRA is related in functionality to the previous ones but it is creating conflicts). I think we should hold on continuing work on top of something that is broken until the broken stuff is fixed. Quoting Arun, "Committers - Henceforth, please use extreme caution while committing to branch-2.3. Please commit *only* blockers to 2.3." YARN-1661 & YARN-1689 are not blockers. Unless there are objections, I'll revert all these JIRAs from branch-2.3 tomorrow around noon and I'll update fixedVersion in the JIRAs. I'm inclined to revert them from branch-2 as well. Thoughts? Thanks. On Thu, Feb 6, 2014 at 3:54 PM, Robert Kanter wrote: > I think we should revert YARN-1490 from Hadoop 2.3 branch. I think it was > causing some strange behavior in the Oozie unit tests: > > Basically, we use a single MiniMRCluster and MiniDFSCluster across all unit > tests in a module. With YARN-1490 we saw that, regardless of test order, > the last few tests would timeout waiting for an MR job to finish; on slower > machines, the entire test suite would timeout. Through some digging, I > found that we were getting a ton of "Connection refused" Exceptions on > LeaseRenewer talking to the NN and a few on the AM talking to the RM. > > After a bunch of investigation, I found that the problem went away once > YARN-1490 was removed. Though I couldn't figure out the exact problem. > Even though this occurred in unit tests, it does make me concerned that it > could indicate some bigger issue in a long-running real cluster (where > everything isn't running on the same machine) that we haven't seen yet. > > > > On Thu, Feb 6, 2014 at 3:06 PM, Karthik Kambatla > wrote: > > > I have marked MAPREDUCE-5744 a blocker for 2.3. Committing it shortly. > Will > > pull it out of branch-2.3 if anyone objects. > > > > > > On Thu, Feb 6, 2014 at 2:04 PM, Arpit Agarwal > >wrote: > > > > > Merged HADOOP-10273 to branch-2.3 as r1565456. > > > > > > > > > On Wed, Feb 5, 2014 at 4:49 PM, Arpit Agarwal < > aagar...@hortonworks.com > > > >wrote: > > > > > > > IMO HADOOP-10273 (Fix 'mvn site') should be included in 2.3. > > > > > > > > I will merge it to branch-2.3 tomorrow PST if no one disagrees. > > > > > > > > > > > > On Tue, Feb 4, 2014 at 5:03 PM, Alejandro Abdelnur < > t...@cloudera.com > > > >wrote: > > > > > > > >> IMO YARN-1577 is a blocker, it is breaking unmanaged AMs in a very > odd > > > >> ways > > > >> (to the point it seems un-deterministic). > > > >> > > > >> I'd say eiher YARN-1577 is fixed or we revert > > > >> YARN-1493/YARN-1490/YARN-1166/YARN-1041/YARN-1566 (almost clean > > reverts) > > > >> from Hadoop 2.3 branch before doing the release. > > > >> > > > >> > > > >> I've verified that after reverting those JIRAs things work fine with > > > >> unmanaged AMs. > > > >> > > > >> Thanks. > > > >> > > > >> > > > >> > > > >> > > > >> On Tue, Feb 4, 2014 at 11:45 AM, Arun C Murthy > > > > >> wrote: > > > >> > > > >> > I punted YARN-1444 to 2.4 since it's a long-standing issue. 
> > > >> > > > > >> > Jian is away and I don't see YARN-1577 & YARN-1206 making much > > > progress > > > >> > till he is back; so I'm inclined to push both to 2.4 too. Any > > > >> objections? > > > >> > > > > >> > Looks like Daryn has both HADOOP-10301 & HDFS-4564 covered. > > > >> > > > > >> > Overall, I'll try get this out in next couple of days if we can > > clear > > > >> the > > > >> > list. > > > >> > > > > >> > thanks, > > > >> > Arun > > > >&
Re: Re-swizzle 2.3
IMO YARN-1577 is a blocker; it is breaking unmanaged AMs in very odd ways (to the point it seems non-deterministic). I'd say either YARN-1577 is fixed or we revert YARN-1493/YARN-1490/YARN-1166/YARN-1041/YARN-1566 (almost clean reverts) from the Hadoop 2.3 branch before doing the release. I've verified that after reverting those JIRAs things work fine with unmanaged AMs. Thanks. On Tue, Feb 4, 2014 at 11:45 AM, Arun C Murthy wrote: > I punted YARN-1444 to 2.4 since it's a long-standing issue. > > Jian is away and I don't see YARN-1577 & YARN-1206 making much progress > till he is back; so I'm inclined to push both to 2.4 too. Any objections? > > Looks like Daryn has both HADOOP-10301 & HDFS-4564 covered. > > Overall, I'll try get this out in next couple of days if we can clear the > list. > > thanks, > Arun > > On Feb 3, 2014, at 12:14 PM, Arun C Murthy wrote: > > > An update. Per https://s.apache.org/hadoop-2.3.0-blockers we are now > down to 5 blockers: 1 Common, 1 HDFS, 3 YARN. > > > > Daryn (thanks!) has both the non-YARN covered. Vinod is helping out with > the YARN ones. > > > > thanks, > > Arun > > > > > > > > -- > Arun C. Murthy > Hortonworks Inc. > http://hortonworks.com/ > -- Alejandro
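For context, a hedged sketch of what one of these "almost clean reverts" looks like with the Subversion workflow the project used at the time; the revision number and the commit message are purely illustrative:

$ cd branch-2.3
$ # reverse-merge the commit that brought in YARN-1490 (revision number is hypothetical)
$ svn merge -c -1556680 .
$ svn commit -m "Reverting YARN-1490 from branch-2.3."

Each JIRA on the list above would typically be reverted the same way, newest commit first, so that the reverse merges stay clean.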
READY: Apache Jenkins job to create Hadoop 2.x release artifacts
*All,* The Apache Jenkins job to build release artifacts for Hadoop 2 is ready (in its first incarnation). The job URL is: https://builds.apache.org/job/HADOOP2_Release_Artifacts_Builder/ I hope this will make moot all concerns about user accounts and about the hardware ownership being used to generate Hadoop 2 release artifacts. *Andrew & Arun,* Because the two of you are driving/coordinating the release of Hadoop 2.3.0, you'll be the first ones to use the release script and job. If you run into any issues or you have ideas on how to improve it, please don't hesitate to let me know; I'll be more than happy to help. One thing I ask you both is to update the "How to Release" wiki page to reflect all necessary current steps to build a release, especially the manual steps. Having this up to date and complete will help others take on the Release Manager tasks without being sucked into a time sink. *Roman,* If Bigtop wants to consume release artifacts from this job on a regular basis, we'll need to modify the job to run periodically. Feel free to do so; just make sure that when a run happens because of a cron trigger the RC_LABEL used states so, e.g. NIGHTLY, and all the produced artifacts will be labeled with it. Also, if it is more convenient for Bigtop to have artifacts without SNAPSHOT JARs even if the Hadoop POMs still have SNAPSHOT in the branch, I have an idea on how we could do that (doing a build with an additional patch -being fetched- on top of the branch). Thanks and cheers to all. -- Alejandro
Re: Apache jenkins job to build release artifacts (WAS: Issue with my username on my company provided dev box?)
I agree on not auto-signing (the script does not do it, on purpose). I was referring to deploying release artifact JARs. OK, then we are done then. Thanks. On Thu, Jan 30, 2014 at 5:02 PM, Roman Shaposhnik wrote: > On Thu, Jan 30, 2014 at 4:43 PM, Alejandro Abdelnur > wrote: > > We could improve this script further to deploy the built JARs to the > Maven > > repo. I don't know how to do this, so it would be great if somebody that > > know how jumps on that. Maybe a s a follow up JIRA, so we have something > > going. > > If you're talking about -SNAPSHOT bits -- there's nothing to do. All > official > build slaves on builds.apache.org are supposed to be setup with the > right configs so that mvn deploy will do the trick. If you're talking about > release artifacts -- it is absolutely NOT a good idea to automate that. > Just like its not a good idea to automate signing the release bits with > your personal key. > > Thanks, > Roman. > -- Alejandro
Apache jenkins job to build release artifacts (WAS: Issue with my username on my company provided dev box?)
[Cross-posting with https://issues.apache.org/jira/browse/HADOOP-10313] OK, we have: * A script, create-release.sh, that creates release artifacts * An Apache Jenkins job that runs the script and produces the artifacts in Apache CI machines, thanks Yahoo! (or shouldn't I say that?) The Apache Jenkins job is: https://builds.apache.org/job/HADOOP2_Release_Artifacts_Builder/ There you'll see the output of a release build. When triggering the build, you can specify an RC_LABEL (RC0 in this case). If you do so, all the artifact files will be postfixed with it. The job is currently producing: * RAT report * SOURCE tarball and its MD5 * BINARY tarball and its MD5 * SITE tarball (ready to plaster in Apache Hadoop site) * CHANGES files I've verified the produced SOURCE is correct and I can build a BINARY out of it. I've verified the produced BINARY tarball works (in pseudo-cluster mode). Running 'hadoop version' from the BINARY tarball reports: $ bin/hadoop version Hadoop 2.4.0-SNAPSHOT Subversion http://svn.apache.org/repos/asf/hadoop/common -r 1563020 Compiled by jenkins on 2014-01-31T00:03Z Compiled with protoc 2.5.0 From source with checksum 37ccb6f84b23196f521243fd192070 Once the JIRA is committed we have to modify the Jenkins job to use the script from the 'dev-support/' directory. We could improve this script further to deploy the built JARs to the Maven repo. I don't know how to do this, so it would be great if somebody that knows how jumps on that. Maybe as a follow-up JIRA, so we have something going. Thanks. On Thu, Jan 30, 2014 at 8:43 AM, Andrew Purtell wrote: > The Apache Software Foundation takes branding seriously, we all know this. > Making an inquiry about a possible, and I believe unintended, mis-branding > issue involving Apache Hadoop artifacts is not a personal assault. The > hysterical responses here have been unprofessional and disgraceful, and > only serve to reinforce the notion people have outside your walls that you > can't be bothered to treat the larger community with respect. That is not a > petty issue I assure you. > > > On Wed, Jan 29, 2014 at 7:31 PM, Arun C Murthy > wrote: > > > > > Stack, > > > > Apologies for the late response, I just saw this. > > > > On Jan 29, 2014, at 3:33 PM, Stack wrote: > > > > > Slightly related, I just ran into this looking back at my 2.2.0 > download: > > > > > > [stack@c2020 hadoop-2.2.0]$ ./bin/hadoop version > > > Hadoop 2.2.0 > > > Subversion https://svn.apache.org/repos/asf/hadoop/common -r 1529768 > > > Compiled by hortonmu on 2013-10-07T06:28Z > > > ... > > > > > > Does the apache binary have to be compiled by 'hortonmu'? Could it be > > > compiled by 'arun', or 'apachemu'? > > > > > > Thanks, > > > St.Ack > > > > Thank you for tarring all my work here with a brush by insinuating not > > sure what for using my company provided dev machine to work on Hadoop. > > > > I'll try find a non-company provided dev machine to create future > builds, > > it might take some time because I'll have to go purchase another one. Or, > > maybe, another option is to legally change my name. > > > > Meanwhile, while we are on this topic, I just did: > > > > $ git clone git://git.apache.org/hbase.git > > $ grep -ri cloudera * > > > > Should I file a jira to fix all refs including the following imports of > > org.cloudera.* (pasted below) ... can you please help fix that? There are > > more, but I'll leave it to your discretion. Compared to my username on my > > company provided dev. box, this seems far more egregious. Do you agree? 
> > > > In future, it might be useful to focus our efforts on moving the project > > forward by contributing/reviewing code/docs etc., rather than on petty > > things like usernames. > > > > thanks, > > Arun > > > > > > > hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java:import > > org.cloudera.htrace.Trace; > > > hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClient.java:import > > org.cloudera.htrace.Span; > > > hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClient.java:import > > org.cloudera.htrace.Trace; > > > hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java:import > > org.cloudera.htrace.Trace; > > > hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java:import > > org.cloudera.htrace.TraceScope; > > > hbase-it/src/test/java/org/apache/hadoop/hbase/mapreduce/IntegrationTestImportTsv.java: > > org.cloudera.htrace.Trace.class); // HTrace > > > hbase-it/src/test/java/org/apache/hadoop/hbase/mttr/IntegrationTestMTTR.java:import > > org.cloudera.htrace.Span; > > > hbase-it/src/test/java/org/apache/hadoop/hbase/mttr/IntegrationTestMTTR.java:import > > org.cloudera.htrace.Trace; > > > hbase-it/src/test/java/org/apache/hadoop/hbase/mttr/IntegrationTestMTTR.java:import > > org.cloudera.htrace.TraceScope; > > > hbase-it/src/test/java/org/apache/hadoop/hbase/mttr/Int
[jira] [Created] (MAPREDUCE-5724) JobHistoryServer does not start if HDFS is not running
Alejandro Abdelnur created MAPREDUCE-5724: - Summary: JobHistoryServer does not start if HDFS is not running Key: MAPREDUCE-5724 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5724 Project: Hadoop Map/Reduce Issue Type: Bug Components: jobhistoryserver Affects Versions: 3.0.0, 2.4.0 Reporter: Alejandro Abdelnur Priority: Critical Starting JHS without HDFS running fails with the following error: {code} STARTUP_MSG: build = git://git.apache.org/hadoop-common.git -r ad74e8850b99e03b0b6435b04f5b3e9995bc3956; compiled by 'tucu' on 2014-01-14T22:40Z STARTUP_MSG: java = 1.7.0_45 / 2014-01-14 16:47:40,264 INFO org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer: registered UNIX signal handlers for [TERM, HUP, INT] 2014-01-14 16:47:40,883 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2014-01-14 16:47:41,101 INFO org.apache.hadoop.mapreduce.v2.hs.JobHistory: JobHistory Init 2014-01-14 16:47:41,710 INFO org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://localhost:8020/tmp/hadoop-yarn/staging/history/done] org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://localhost:8020/tmp/hadoop-yarn/staging/history/done] at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.serviceInit(HistoryFileManager.java:505) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) at org.apache.hadoop.mapreduce.v2.hs.JobHistory.serviceInit(JobHistory.java:94) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108) at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceInit(JobHistoryServer.java:143) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.launchJobHistoryServer(JobHistoryServer.java:207) at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.main(JobHistoryServer.java:217) Caused by: java.net.ConnectException: Call From dontknow.local/172.20.10.4 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730) at org.apache.hadoop.ipc.Client.call(Client.java:1410) at org.apache.hadoop.ipc.Client.call(Client.java:1359) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:185) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101) at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:671) at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1722) at org.apache.hadoop.fs.Hdfs.getFileStatus(Hdfs.java:124) at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1106) at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1102) at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) at org.apache.hadoop.fs.FileContext.getFileStatus(FileContext.java:1102) at org.apache.hadoop.fs.FileContext$Util.exists(FileContext.java:1514) at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.mkdir(HistoryFileManager.java:561) at org.apache.hadoop.ma
[jira] [Resolved] (MAPREDUCE-5722) client-app module failing to compile, missing jersey dependency
[ https://issues.apache.org/jira/browse/MAPREDUCE-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur resolved MAPREDUCE-5722. --- Resolution: Invalid False alarm, it seems I was picking up some stale POMs from my local cache; a full clean build went OK. > client-app module failing to compile, missing jersey dependency > --- > > Key: MAPREDUCE-5722 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5722 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: build >Affects Versions: 3.0.0, 2.4.0 > Reporter: Alejandro Abdelnur > Assignee: Alejandro Abdelnur >Priority: Blocker > Fix For: 2.4.0 > > > This seems a fallout of YARN-888, oddly enough it did not happen while doing > a full build with the patch before committing. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
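For reference, a hedged sketch of the kind of cleanup described in the resolution above; the exact repository path to purge is an assumption, as the comment only mentions stale POMs in the local Maven cache:

$ # drop possibly stale Hadoop artifacts from the local Maven cache (path is an assumption)
$ rm -rf ~/.m2/repository/org/apache/hadoop
$ # full clean build from the top of the source tree
$ mvn clean install -DskipTests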
[jira] [Created] (MAPREDUCE-5722) client-app module failing to compile, missing jersey dependency
Alejandro Abdelnur created MAPREDUCE-5722: - Summary: client-app module failing to compile, missing jersey dependency Key: MAPREDUCE-5722 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5722 Project: Hadoop Map/Reduce Issue Type: Bug Components: build Affects Versions: 3.0.0, 2.4.0 Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Priority: Blocker Fix For: 2.4.0 This seems a fallout of YARN-888, oddly enough it did not happen while doing a full build with the patch before committing. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
Re: branch development for HADOOP-9639
Chris, I'm already on it. Thanks. On Fri, Dec 6, 2013 at 9:49 AM, Chris Nauroth wrote: > +1 for the idea. The branch committership clause was added for exactly > this kind of scenario. > > From the phrasing in the bylaws, it looks like we'll need assistance from > PMC to get the ball rolling. Is there a PMC member out there who could > volunteer to help start the process with Sangjin? > > Chris Nauroth > Hortonworks > http://hortonworks.com/ > > > > On Mon, Dec 2, 2013 at 11:47 AM, Sangjin Lee wrote: > > > We have been having discussions on HADOOP-9639 (shared cache for jars) > and > > the proposed design there for some time now. We are going to start work > on > > this and have it vetted and reviewed by the community. I have just filed > > some more implementation JIRAs for this feature: YARN-1465, > MAPREDUCE-5662, > > YARN-1466, YARN-1467 > > > > Rather than working privately in our corner and sharing a big patch at > the > > end, I'd like to explore the idea of developing on a branch in the public > > to foster more public feedback. Recently the Hadoop PMC has passed the > > change to the bylaws to allow for branch committers ( > > > > > http://mail-archives.apache.org/mod_mbox/hadoop-general/201307.mbox/%3CCACO5Y4y7HZnn3BS-ZyCVfv-UBcMudeQhndr2vqg%3DXqE1oBiQvQ%40mail.gmail.com%3E > > ), > > and I think it would be a good model for this development. > > > > I'd like to propose a branch development and a branch committer status > for > > a couple of us who are going to work on this per bylaw. Could you please > > let me know what you think? > > > > Thanks, > > Sangjin > > > > -- > CONFIDENTIALITY NOTICE > NOTICE: This message is intended for the use of the individual or entity to > which it is addressed and may contain information that is confidential, > privileged and exempt from disclosure under applicable law. If the reader > of this message is not the intended recipient, you are hereby notified that > any printing, copying, dissemination, distribution, disclosure or > forwarding of this communication is strictly prohibited. If you have > received this communication in error, please contact the sender immediately > and delete it from your system. Thank You. > -- Alejandro
Re: Next releases
Sounds good, just a little impedance between what seem to be two conflicting goals: * what features we target for each release * train releases If we want to do train releases at fixed times, then if a feature is not ready, it catches the next train; no delays of the train because of a feature. If a bug is delaying the train, and a feature becomes ready in the meantime and it does not destabilize the release, it can jump on board; if it breaks something, it goes out of the window until the next train. Also, we have to decide what we do with 2.2.1. I would say start wrapping up the current 2.2 branch and make it the first train. thx On Wed, Nov 13, 2013 at 12:55 PM, Arun C Murthy wrote: > > On Nov 13, 2013, at 12:38 PM, Sandy Ryza wrote: > > > Here are few patches that I put into 2.2.1 and are minimally invasive, > but > > I don't think are blockers: > > > > YARN-305. Fair scheduler logs too many "Node offered to app" messages. > > YARN-1335. Move duplicate code from FSSchedulerApp and > > FiCaSchedulerApp into SchedulerApplication > > YARN-1333. Support blacklisting in the Fair Scheduler > > YARN-1109. Demote NodeManager "Sending out status for container" logs > > to debug (haosdent via Sandy Ryza) > > YARN-1388. Fair Scheduler page always displays blank fair share > > > > +1 to doing releases at some fixed time interval. > > To be clear, I still think we should be *very* clear about what features > we target for each release (2.3, 2.4, etc.). > > Except, we don't wait infinitely for any specific feature - if we miss a > 4-6 week window a feature goes to the next train. > > Makes sense? > > thanks, > Arun > > > > > -Sandy > > > > > > On Wed, Nov 13, 2013 at 10:10 AM, Arun C Murthy > wrote: > > > >> > >> On Nov 12, 2013, at 1:54 PM, Todd Lipcon wrote: > >> > >>> On Mon, Nov 11, 2013 at 2:57 PM, Colin McCabe >>> wrote: > >>> > To be honest, I'm not aware of anything in 2.2.1 that shouldn't be > there. However, I have only been following the HDFS and common side > of things so I may not have the full picture. Arun, can you give a > specific example of something you'd like to "blow away"? > >> > >> There are bunch of issues in YARN/MapReduce which clearly aren't > >> *critical*, similarly in HDFS a cursory glance showed up some > >> *enhancements*/*improvements* in CHANGES.txt which aren't necessary for > a > >> patch release, plus things like: > >> > >>HADOOP-9623 > >> Update jets3t dependency to 0.9.0 > >> > >> > >> > >> > >> > >> > >> > >> > >> Having said that, the HDFS devs know their code the best. > >> > >>> I agree with Colin. If we've been backporting things into a patch > release > >>> (third version component) which don't belong, we should explicitly call > >> out > >>> those patches, so we can learn from our mistakes and have a discussion > >>> about what belongs. > >> > >> Good point. > >> > >> Here is a straw man proposal: > >> > >> > >> A patch (third version) release should only include *blocker* bugs which > >> are critical from an operational, security or data-integrity issues. > >> > >> This way, we can ensure that a minor series release (2.2.x or 2.3.x or > >> 2.4.x) is always release-able, and more importantly, deploy-able at any > >> point in time. > >> > >> > >> > >> Sandy did bring up a related point about timing of releases and the urge > >> for everyone to cram features/fixes into a dot release. > >> > >> So, we could remedy that situation by doing a release every 4-6 weeks > >> (2.3, 2.4 etc.) and keep the patch releases limited to blocker bugs. > >> > >> Thoughts? 
> >> > >> thanks, > >> Arun > >> > >> > >> > >> > >> > >> -- > >> CONFIDENTIALITY NOTICE > >> NOTICE: This message is intended for the use of the individual or > entity to > >> which it is addressed and may contain information that is confidential, > >> privileged and exempt from disclosure under applicable law. If the > reader > >> of this message is not the intended recipient, you are hereby notified > that > >> any printing, copying, dissemination, distribution, disclosure or > >> forwarding of this communication is strictly prohibited. If you have > >> received this communication in error, please contact the sender > immediately > >> and delete it from your system. Thank You. > >> > > -- > Arun C. Murthy > Hortonworks Inc. > http://hortonworks.com/ > > > > -- > CONFIDENTIALITY NOTICE > NOTICE: This message is intended for the use of the individual or entity to > which it is addressed and may contain information that is confidential, > privileged and exempt from disclosure under applicable law. If the reader > of this message is not the intended recipient, you are hereby notified that > any printing, copying, dissemination, distribution, disclosure or > forwarding of this communication is strictly prohibited. If you have > received this communication in error, please contact the sender immediately > and delete it from your system. Thank You. > -- Alejandro
Re: Next releases
Arun, thanks for jumping on this. On hadoop branch-2.2, I've quickly scanned the commit logs starting from the 2.2.0 release and I've found around 20 JIRAs that I'd like to see in 2.2.1. Not all of them are bugs, but they don't shake anything and they improve usability. I presume others will have their own laundry lists as well, and I wonder how much the union of all of them adds up to out of the current 81 commits. How about splitting the JIRAs among a few contributors to assert there is nothing risky in there? And if so, discuss getting rid of those commits for 2.2.1. IMO doing that would be cheaper than selectively applying commits on a fresh branch. That said, I think we should get 2.2.1 out the door before switching main efforts to 2.3.0. I volunteer myself to drive a 2.2.1 release ASAP if you don't have the bandwidth at the moment for it. Cheers. Alejandro Commits in branch-2.2 that I'd like to be in the 2.2.1 release (the ones prefixed with '*' technically are not bugs): YARN-1284. LCE: Race condition leaves dangling cgroups entries for killed containers. (Alejandro Abdelnur via Sandy Ryza) YARN-1265. Fair Scheduler chokes on unhealthy node reconnect (Sandy Ryza) YARN-1044. used/min/max resources do not display info in the scheduler page (Sangjin Lee via Sandy Ryza) YARN-305. Fair scheduler logs too many "Node offered to app" messages. (Lohit Vijayarenu via Sandy Ryza) *MAPREDUCE-5463. Deprecate SLOTS_MILLIS counters. (Tzuyoshi Ozawa via Sandy Ryza) YARN-1259. In Fair Scheduler web UI, queue num pending and num active apps switched. (Robert Kanter via Sandy Ryza) YARN-1295. In UnixLocalWrapperScriptBuilder, using bash -c can cause Text file busy errors. (Sandy Ryza) *MAPREDUCE-5457. Add a KeyOnlyTextOutputReader to enable streaming to write out text files without separators (Sandy Ryza) *YARN-1258. Allow configuring the Fair Scheduler root queue (Sandy Ryza) *YARN-1288. Make Fair Scheduler ACLs more user friendly (Sandy Ryza) YARN-1330. Fair Scheduler: defaultQueueSchedulingPolicy does not take effect (Sandy Ryza) HDFS-5403. WebHdfs client cannot communicate with older WebHdfs servers post HDFS-5306. Contributed by Aaron T. Myers. *YARN-1335. Move duplicate code from FSSchedulerApp and FiCaSchedulerApp into SchedulerApplication (Sandy Ryza) *YARN-1333. Support blacklisting in the Fair Scheduler (Tsuyoshi Ozawa via Sandy Ryza) *MAPREDUCE-4680. Job history cleaner should only check timestamps of files in old enough directories (Robert Kanter via Sandy Ryza) YARN-1109. Demote NodeManager "Sending out status for container" logs to debug (haosdent via Sandy Ryza) *YARN-1321. Changed NMTokenCache to support both singleton and an instance usage. Contributed by Alejandro Abdelnur YARN-1343. NodeManagers additions/restarts are not reported as node updates in AllocateResponse responses to AMs. (tucu) YARN-1381. Same relaxLocality appears twice in exception message of AMRMClientImpl#checkLocalityRelaxationConflict() (Ted Yu via Sandy Ryza) HADOOP-9898. Set SO_KEEPALIVE on all our sockets. Contributed by Todd Lipcon. YARN-1388. Fair Scheduler page always displays blank fair share (Liyin Liang via Sandy Ryza) On Fri, Nov 8, 2013 at 10:35 PM, Chris Nauroth wrote: > Arun, what are your thoughts on test-only patches? I know I've been > merging a lot of Windows test stabilization patches down to branch-2.2. > These can't rightly be called blockers, but they do improve dev > experience, and there is no risk to product code. 
> > Chris Nauroth > Hortonworks > http://hortonworks.com/ > > > > On Fri, Nov 8, 2013 at 1:30 AM, Steve Loughran >wrote: > > > On 8 November 2013 02:42, Arun C Murthy wrote: > > > > > Gang, > > > > > > Thinking through the next couple of releases here, appreciate f/b. > > > > > > # hadoop-2.2.1 > > > > > > I was looking through commit logs and there is a *lot* of content here > > > (81 commits as on 11/7). Some are features/improvements and some are > > fixes > > > - it's really hard to distinguish what is important and what isn't. > > > > > > I propose we start with a blank slate (i.e. blow away branch-2.2 and > > > start fresh from a copy of branch-2.2.0) and then be very careful and > > > meticulous about including only *blocker* fixes in branch-2.2. So, most > > of > > > the content here comes via the next minor release (i.e. hadoop-2.3) > > > > > > In future, we continue to be *very* parsimonious about what gets into > a > > > patch release (major.minor.patch) - in general, these should be only > > > *blocker* fixes or key operational issues. > > > > > > >
test-patch failing with OOM errors in javah
The following is happening in builds for MAPREDUCE and YARN patches. I've seen the failures in hadoop5 and hadoop7 machines. I've increased Maven memory to 1GB (export MAVEN_OPTS="-Xmx1024m" in the jenkins jobs) but still some failures persist: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4159/ Does anybody has an idea of what may be going on? thx [INFO] --- native-maven-plugin:1.0-alpha-7:javah (default) @ hadoop-common --- [INFO] /bin/sh -c cd /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-common-project/hadoop-common && /home/jenkins/tools/java/latest/bin/javah -d /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-common-project/hadoop-common/target/native/javah -classpath /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-common-project/hadoop-common/target/classes:/home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-common-project/hadoop-annotations/target/classes:/home/jenkins/tools/java/jdk1.6.0_26/jre/../lib/tools.jar:/home/jenkins/.m2/repository/com/google/guava/guava/11.0.2/guava-11.0.2.jar:/home/jenkins/.m2/repository/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.jar:/home/jenkins/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-math/2.1/commons-math-2.1.jar:/home/jenkins/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar:/home/jenkins/.m2/repository/commons-httpclient/commons-httpclient/3.1/commons-httpclient-3.1.jar:/home/jenkins/.m2/repository/commons-codec/commons-codec/1.4/commons-codec-1.4.jar:/home/jenkins/.m2/repository/commons-io/commons-io/2.1/commons-io-2.1.jar:/home/jenkins/.m2/repository/commons-net/commons-net/3.1/commons-net-3.1.jar:/home/jenkins/.m2/repository/javax/servlet/servlet-api/2.5/servlet-api-2.5.jar:/home/jenkins/.m2/repository/org/mortbay/jetty/jetty/6.1.26/jetty-6.1.26.jar:/home/jenkins/.m2/repository/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar:/home/jenkins/.m2/repository/com/sun/jersey/jersey-core/1.9/jersey-core-1.9.jar:/home/jenkins/.m2/repository/com/sun/jersey/jersey-json/1.9/jersey-json-1.9.jar:/home/jenkins/.m2/repository/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar:/home/jenkins/.m2/repository/stax/stax-api/1.0.1/stax-api-1.0.1.jar:/home/jenkins/.m2/repository/com/sun/xml/bind/jaxb-impl/2.2.3-1/jaxb-impl-2.2.3-1.jar:/home/jenkins/.m2/repository/javax/xml/bind/jaxb-api/2.2.2/jaxb-api-2.2.2.jar:/home/jenkins/.m2/repository/javax/activation/activation/1.1/activation-1.1.jar:/home/jenkins/.m2/repository/org/codehaus/jackson/jackson-jaxrs/1.8.8/jackson-jaxrs-1.8.8.jar:/home/jenkins/.m2/repository/org/codehaus/jackson/jackson-xc/1.8.8/jackson-xc-1.8.8.jar:/home/jenkins/.m2/repository/com/sun/jersey/jersey-server/1.9/jersey-server-1.9.jar:/home/jenkins/.m2/repository/asm/asm/3.2/asm-3.2.jar:/home/jenkins/.m2/repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar:/home/jenkins/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/home/jenkins/.m2/repository/net/java/dev/jets3t/jets3t/0.6.1/jets3t-0.6.1.jar:/home/jenkins/.m2/repository/commons-lang/commons-lang/2.5/commons-lang-2.5.jar:/home/jenkins/.m2/repository/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:/home/jenkins/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar:/home/jenkins/.m2/repository/commons-digester/commons-digester/1.8/commons-digester-1.8.jar:/home/jenkins/.m2/repository/commons-beanut
ils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar:/home/jenkins/.m2/repository/commons-beanutils/commons-beanutils-core/1.8.0/commons-beanutils-core-1.8.0.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.5/slf4j-api-1.7.5.jar:/home/jenkins/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.8.8/jackson-core-asl-1.8.8.jar:/home/jenkins/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.8.8/jackson-mapper-asl-1.8.8.jar:/home/jenkins/.m2/repository/org/apache/avro/avro/1.7.4/avro-1.7.4.jar:/home/jenkins/.m2/repository/com/thoughtworks/paranamer/paranamer/2.3/paranamer-2.3.jar:/home/jenkins/.m2/repository/org/xerial/snappy/snappy-java/1.0.4.1/snappy-java-1.0.4.1.jar:/home/jenkins/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-common-project/hadoop-auth/target/classes:/home/jenkins/.m2/repository/com/jcraft/jsch/0.1.42/jsch-0.1.42.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper/3.4.5/zookeeper-3.4.5.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar:/home/jenkins/.m2/repository/org/tukaani/xz/1.0/xz-1.0.jar org.apache.hadoop.io.compress.zlib.ZlibCompressor org.apache.hadoop.io.compress.zlib.ZlibDecompressor org.apache.hadoop.io.compress.bzip2.Bzip2Compressor org.apache.hadoop.io.compress.bzip2.Bzip2Decompressor org.apache.h
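A hedged aside on the memory settings mentioned above: MAVEN_OPTS only sizes the JVM that runs Maven itself, while the log shows javah being forked as a separate process via /bin/sh, so the extra heap never reaches it. If the OOM really happens inside javah, the flag would have to go to javah's own JVM; for example (paths and sizes are illustrative only):

$ # raise the Maven JVM heap, as already done in the Jenkins jobs
$ export MAVEN_OPTS="-Xmx1024m"
$ # javah runs in its own JVM; -J hands a flag straight to that JVM (illustrative invocation)
$ javah -J-Xmx512m -d target/native/javah -classpath target/classes org.apache.hadoop.io.compress.zlib.ZlibCompressor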
Re: [VOTE] Release Apache Hadoop 2.2.0
+1 * downloaded source tarball * verified MD5 * verified signature * verified CHANGES.txt files, release # and date * run 'mvn apache-rat:check' successfully * built distribution * setup speudo cluster * started HDFS/YARN * run some HTTFS tests * run a couple of MR examples * run a few tests using Llama AM On Mon, Oct 7, 2013 at 12:07 PM, Tassapol Athiapinya < tathiapi...@hortonworks.com> wrote: > +1 for the release. > > I have deployed a multinode cluster and extensively tested MR speculative > execution, YARN CLI and YARN distributed shell. There were couple of issues > I encountered while testing MAPREDUCE-5533, YARN-1168, YARN-1167, > YARN-1157, YARN-1131, YARN-1118, YARN-1117 and all of them have been fixed. > > Thanks, > Tassapol > > On Oct 7, 2013, at 12:00 AM, Arun C Murthy wrote: > > > Folks, > > > > I've created a release candidate (rc0) for hadoop-2.2.0 that I would > like to get released - this release fixes a small number of bugs and some > protocol/api issues which should ensure they are now stable and will not > change in hadoop-2.x. > > > > The RC is available at: > http://people.apache.org/~acmurthy/hadoop-2.2.0-rc0 > > The RC tag in svn is here: > http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.2.0-rc0 > > > > The maven artifacts are available via repository.apache.org. > > > > Please try the release and vote; the vote will run for the usual 7 days. > > > > thanks, > > Arun > > > > P.S.: Thanks to Colin, Andrew, Daryn, Chris and others for helping nail > down the symlinks-related issues. I'll release note the fact that we have > disabled it in 2.2. Also, thanks to Vinod for some heavy-lifting on the > YARN side in the last couple of weeks. > > > > > > > > > > > > -- > > Arun C. Murthy > > Hortonworks Inc. > > http://hortonworks.com/ > > > > > > > > -- > > CONFIDENTIALITY NOTICE > > NOTICE: This message is intended for the use of the individual or entity > to > > which it is addressed and may contain information that is confidential, > > privileged and exempt from disclosure under applicable law. If the reader > > of this message is not the intended recipient, you are hereby notified > that > > any printing, copying, dissemination, distribution, disclosure or > > forwarding of this communication is strictly prohibited. If you have > > received this communication in error, please contact the sender > immediately > > and delete it from your system. Thank You. > > > -- > CONFIDENTIALITY NOTICE > NOTICE: This message is intended for the use of the individual or entity to > which it is addressed and may contain information that is confidential, > privileged and exempt from disclosure under applicable law. If the reader > of this message is not the intended recipient, you are hereby notified that > any printing, copying, dissemination, distribution, disclosure or > forwarding of this communication is strictly prohibited. If you have > received this communication in error, please contact the sender immediately > and delete it from your system. Thank You. > -- Alejandro
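For reference, a hedged sketch of the verification steps listed in the vote above; the file names only match the 2.2.0 RC layout by assumption, and the exact commands are illustrative:

$ md5sum hadoop-2.2.0-src.tar.gz           # compare against the published .md5
$ gpg --verify hadoop-2.2.0-src.tar.gz.asc hadoop-2.2.0-src.tar.gz
$ tar xzf hadoop-2.2.0-src.tar.gz && cd hadoop-2.2.0-src
$ mvn apache-rat:check                     # license audit, as in the vote
$ mvn package -Pdist -DskipTests -Dtar     # build the binary distribution

From there the binary tarball under hadoop-dist/target can be unpacked to set up the pseudo cluster and run the MR example and HttpFS checks mentioned above.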
Re: 2.1.2 (Was: Re: [VOTE] Release Apache Hadoop 2.1.1-beta)
Arun, Does this mean that you want to skip a beta release and go straight to GA with the next release? thx On Tue, Oct 1, 2013 at 4:15 PM, Arun C Murthy wrote: > Guys, > > I took a look at the content in 2.1.2-beta so far, other than the > critical fixes such as HADOOP-9984 (symlinks) and few others in YARN/MR, > there is fairly little content (unit tests fixes etc.) > > Furthermore, it's standing up well in testing too. Plus, the protocols > look good for now (I wrote a gohadoop to try convince myself), let's lock > them in. > > Given that, I'm thinking we can just go ahead rename it 2.2.0 rather than > make another 2.1.x release. > > This will drop a short-lived release (2.1.2) and help us move forward on > 2.3 which has a fair bunch of content already... > > Thoughts? > > thanks, > Arun > > > On Sep 24, 2013, at 4:24 PM, Zhijie Shen wrote: > > > I've added MAPREDUCE-5531 to the blocker list. - Zhijie > > > > > > On Tue, Sep 24, 2013 at 3:41 PM, Arun C Murthy > wrote: > > > >> With 4 +1s (3 binding) and no -1s the vote passes. I'll push it out… > I'll > >> make it clear on the release page, that there are some known issues and > >> that we will follow up very shortly with another release. > >> > >> Meanwhile, let's fix the remaining blockers (please mark them as such > with > >> Target Version 2.1.2-beta). > >> The current blockers are here: > >> http://s.apache.org/hadoop-2.1.2-beta-blockers > >> > >> thanks, > >> Arun > >> > >> On Sep 16, 2013, at 11:38 PM, Arun C Murthy > wrote: > >> > >>> Folks, > >>> > >>> I've created a release candidate (rc0) for hadoop-2.1.1-beta that I > >> would like to get released - this release fixes a number of bugs on top > of > >> hadoop-2.1.0-beta as a result of significant amounts of testing. > >>> > >>> If things go well, this might be the last of the *beta* releases of > >> hadoop-2.x. > >>> > >>> The RC is available at: > >> http://people.apache.org/~acmurthy/hadoop-2.1.1-beta-rc0 > >>> The RC tag in svn is here: > >> > http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.1.1-beta-rc0 > >>> > >>> The maven artifacts are available via repository.apache.org. > >>> > >>> Please try the release and vote; the vote will run for the usual 7 > days. > >>> > >>> thanks, > >>> Arun > >>> > >>> > >>> -- > >>> Arun C. Murthy > >>> Hortonworks Inc. > >>> http://hortonworks.com/ > >>> > >>> > >> > >> -- > >> Arun C. Murthy > >> Hortonworks Inc. > >> http://hortonworks.com/ > >> > >> > >> > >> -- > >> CONFIDENTIALITY NOTICE > >> NOTICE: This message is intended for the use of the individual or > entity to > >> which it is addressed and may contain information that is confidential, > >> privileged and exempt from disclosure under applicable law. If the > reader > >> of this message is not the intended recipient, you are hereby notified > that > >> any printing, copying, dissemination, distribution, disclosure or > >> forwarding of this communication is strictly prohibited. If you have > >> received this communication in error, please contact the sender > immediately > >> and delete it from your system. Thank You. > >> > > > > > > > > -- > > Zhijie Shen > > Hortonworks Inc. > > http://hortonworks.com/ > > > > -- > > CONFIDENTIALITY NOTICE > > NOTICE: This message is intended for the use of the individual or entity > to > > which it is addressed and may contain information that is confidential, > > privileged and exempt from disclosure under applicable law. 
If the reader > > of this message is not the intended recipient, you are hereby notified > that > > any printing, copying, dissemination, distribution, disclosure or > > forwarding of this communication is strictly prohibited. If you have > > received this communication in error, please contact the sender > immediately > > and delete it from your system. Thank You. > > -- > Arun C. Murthy > Hortonworks Inc. > http://hortonworks.com/ > -- Alejandro
Re: [VOTE] Release Apache Hadoop 2.1.1-beta
ping On Tue, Sep 24, 2013 at 2:36 AM, Alejandro Abdelnur wrote: > Vote for the 2.1.1-beta release is closing tonight, while we had quite a > few +1s, it seems we need to address the following before doing a release: > > symlink discussion: get a concrete and explicit understanding on what we > will do and in what release(s). > > Also, the following JIRAs seem nasty enough to require a new RC: > > https://issues.apache.org/jira/browse/HDFS-5225 (no patch avail) > https://issues.apache.org/jira/browse/HDFS-5228 (patch avail) > https://issues.apache.org/jira/browse/YARN-1089 (patch avail) > https://issues.apache.org/jira/browse/MAPREDUCE-5529 (patch avail) > > I won't -1 the release but I'm un-casting my vote as I think we should > address these things before. > > Thanks. > > Alejandro > > > On Tue, Sep 24, 2013 at 1:49 AM, Suresh Srinivas > wrote: > >> +1 (binding) >> >> >> Verified the signatures and hashes for both src and binary tars. Built >> from >> the source, the binary distribution and the documentation. Started a >> single >> node cluster and tested the following: >> >> # Started HDFS cluster, verified the hdfs CLI commands such ls, copying >> data back and forth, verified namenode webUI etc. >> >> # Ran some tests such as sleep job, TestDFSIO, NNBench etc. >> >> >> >> >> On Mon, Sep 16, 2013 at 11:38 PM, Arun C Murthy >> wrote: >> >> > Folks, >> > >> > I've created a release candidate (rc0) for hadoop-2.1.1-beta that I >> would >> > like to get released - this release fixes a number of bugs on top of >> > hadoop-2.1.0-beta as a result of significant amounts of testing. >> > >> > If things go well, this might be the last of the *beta* releases of >> > hadoop-2.x. >> > >> > The RC is available at: >> > http://people.apache.org/~acmurthy/hadoop-2.1.1-beta-rc0 >> > The RC tag in svn is here: >> > >> http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.1.1-beta-rc0 >> > >> > The maven artifacts are available via repository.apache.org. >> > >> > Please try the release and vote; the vote will run for the usual 7 days. >> > >> > thanks, >> > Arun >> > >> > >> > -- >> > Arun C. Murthy >> > Hortonworks Inc. >> > http://hortonworks.com/ >> > >> > >> > >> > -- >> > CONFIDENTIALITY NOTICE >> > NOTICE: This message is intended for the use of the individual or >> entity to >> > which it is addressed and may contain information that is confidential, >> > privileged and exempt from disclosure under applicable law. If the >> reader >> > of this message is not the intended recipient, you are hereby notified >> that >> > any printing, copying, dissemination, distribution, disclosure or >> > forwarding of this communication is strictly prohibited. If you have >> > received this communication in error, please contact the sender >> immediately >> > and delete it from your system. Thank You. >> > >> >> >> >> -- >> http://hortonworks.com/download/ >> >> -- >> CONFIDENTIALITY NOTICE >> NOTICE: This message is intended for the use of the individual or entity >> to >> which it is addressed and may contain information that is confidential, >> privileged and exempt from disclosure under applicable law. If the reader >> of this message is not the intended recipient, you are hereby notified >> that >> any printing, copying, dissemination, distribution, disclosure or >> forwarding of this communication is strictly prohibited. If you have >> received this communication in error, please contact the sender >> immediately >> and delete it from your system. Thank You. >> > > > > -- > Alejandro > -- Alejandro
Re: [VOTE] Release Apache Hadoop 2.1.1-beta
Vote for the 2.1.1-beta release is closing tonight, while we had quite a few +1s, it seems we need to address the following before doing a release: symlink discussion: get a concrete and explicit understanding on what we will do and in what release(s). Also, the following JIRAs seem nasty enough to require a new RC: https://issues.apache.org/jira/browse/HDFS-5225 (no patch avail) https://issues.apache.org/jira/browse/HDFS-5228 (patch avail) https://issues.apache.org/jira/browse/YARN-1089 (patch avail) https://issues.apache.org/jira/browse/MAPREDUCE-5529 (patch avail) I won't -1 the release but I'm un-casting my vote as I think we should address these things before. Thanks. Alejandro On Tue, Sep 24, 2013 at 1:49 AM, Suresh Srinivas wrote: > +1 (binding) > > > Verified the signatures and hashes for both src and binary tars. Built from > the source, the binary distribution and the documentation. Started a single > node cluster and tested the following: > > # Started HDFS cluster, verified the hdfs CLI commands such ls, copying > data back and forth, verified namenode webUI etc. > > # Ran some tests such as sleep job, TestDFSIO, NNBench etc. > > > > > On Mon, Sep 16, 2013 at 11:38 PM, Arun C Murthy > wrote: > > > Folks, > > > > I've created a release candidate (rc0) for hadoop-2.1.1-beta that I would > > like to get released - this release fixes a number of bugs on top of > > hadoop-2.1.0-beta as a result of significant amounts of testing. > > > > If things go well, this might be the last of the *beta* releases of > > hadoop-2.x. > > > > The RC is available at: > > http://people.apache.org/~acmurthy/hadoop-2.1.1-beta-rc0 > > The RC tag in svn is here: > > > http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.1.1-beta-rc0 > > > > The maven artifacts are available via repository.apache.org. > > > > Please try the release and vote; the vote will run for the usual 7 days. > > > > thanks, > > Arun > > > > > > -- > > Arun C. Murthy > > Hortonworks Inc. > > http://hortonworks.com/ > > > > > > > > -- > > CONFIDENTIALITY NOTICE > > NOTICE: This message is intended for the use of the individual or entity > to > > which it is addressed and may contain information that is confidential, > > privileged and exempt from disclosure under applicable law. If the reader > > of this message is not the intended recipient, you are hereby notified > that > > any printing, copying, dissemination, distribution, disclosure or > > forwarding of this communication is strictly prohibited. If you have > > received this communication in error, please contact the sender > immediately > > and delete it from your system. Thank You. > > > > > > -- > http://hortonworks.com/download/ > > -- > CONFIDENTIALITY NOTICE > NOTICE: This message is intended for the use of the individual or entity to > which it is addressed and may contain information that is confidential, > privileged and exempt from disclosure under applicable law. If the reader > of this message is not the intended recipient, you are hereby notified that > any printing, copying, dissemination, distribution, disclosure or > forwarding of this communication is strictly prohibited. If you have > received this communication in error, please contact the sender immediately > and delete it from your system. Thank You. > -- Alejandro
Re: [VOTE] Release Apache Hadoop 2.1.1-beta
Are we doing a new RC for 2.1.1-beta? On Mon, Sep 23, 2013 at 9:04 PM, Vinod Kumar Vavilapalli wrote: > Correct me if I am wrong, but FWIU, we already released a beta with the > same symlink issues. Given 2.1.1 is just another beta, I believe we can go > ahead with it and resolve the issues in the final GA release. Instead of > resetting the testing done by everyone. > > It's a hard story to sell but beta phase is supposed to be only about > bug-fixes, but incompatible changes that cannot be avoided, well cannot be > avoided. > > Thanks, > +Vinod > > On Sep 23, 2013, at 11:42 AM, Andrew Wang wrote: > > We still need to resolve some symlink issues; are we planning to spin a new > RC? Leaving it as-is is not a good option. > > > On Sun, Sep 22, 2013 at 11:23 PM, Roman Shaposhnik wrote: > > On Mon, Sep 16, 2013 at 11:38 PM, Arun C Murthy > > wrote: > > Folks, > > > I've created a release candidate (rc0) for hadoop-2.1.1-beta that I > > would like to get > > released - this release fixes a number of bugs on top of > > hadoop-2.1.0-beta as a result of significant amounts of testing. > > > If things go well, this might be the last of the *beta* releases of > > hadoop-2.x. > > > The RC is available at: > > http://people.apache.org/~acmurthy/hadoop-2.1.1-beta-rc0 > > The RC tag in svn is here: > > http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.1.1-beta-rc0 > > > The maven artifacts are available via repository.apache.org. > > > Please try the release and vote; the vote will run for the usual 7 days. > > > Short of HDFS-5225 from the Bigtop perspective this RC gets a +1. > > > All tests passed in both secure and unsecure modes in 4 nodes > > pseudo distributed cluster with all the members of Hadoop > > ecosystem running smoke tests. > > > Thanks, > > Roman. > > > > > CONFIDENTIALITY NOTICE > NOTICE: This message is intended for the use of the individual or entity > to which it is addressed and may contain information that is confidential, > privileged and exempt from disclosure under applicable law. If the reader > of this message is not the intended recipient, you are hereby notified that > any printing, copying, dissemination, distribution, disclosure or > forwarding of this communication is strictly prohibited. If you have > received this communication in error, please contact the sender immediately > and delete it from your system. Thank You. > -- Alejandro
Re: [VOTE] Release Apache Hadoop 2.1.1-beta
On Wed, Sep 18, 2013 at 12:03 AM, Karthik Kambatla wrote: > Not sure if this should be a blocker for 2.1.1, but filed HADOOP-9976 to > have a single version of avro. > > It depends on whether there is a known, non-workaroundable issue at runtime because of this. If not, I wouldn't say it is a blocker. thx
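A quick way to check whether more than one Avro version ends up on the classpath, which is what HADOOP-9976 is about, is the standard Maven dependency report (a sketch, run from the source tree):

# More than one version in this output indicates the convergence problem
mvn dependency:tree -Dincludes=org.apache.avro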
Re: [VOTE] Release Apache Hadoop 2.1.1-beta
Thanks Arun. +1 * Downloaded source tarball. * Verified MD5 * Verified signature * run apache-rat:check ok after minor tweak (see NIT1 below) * checked CHANGES.txt headers (see NIT2 below) * built DIST from source * verified hadoop version of Hadoop JARs * configured pseudo cluster * tested HttpFS * run a few MR examples * run a few unmanaged AM app examples The following NITs should be addressed if there is a new RC or in the next release -- NIT1, empty files that make apache-rat:check to fail, these files should be removed: * /Users/tucu/Downloads/h/hadoop-2.1.1-beta-src/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextSymlinkBaseTest.java * /Users/tucu/Downloads/h/hadoop-2.1.1-beta-src/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFSFileContextSymlink.java * /Users/tucu/Downloads/h/hadoop-2.1.1-beta-src/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestFcHdfsSymlink.java -- NIT2, common/hdfs/mapreduce/yarn CHANGES.txt have 2.2.0 header, they should not -- On Tue, Sep 17, 2013 at 8:38 AM, Arun C Murthy wrote: > Folks, > > I've created a release candidate (rc0) for hadoop-2.1.1-beta that I would > like to get released - this release fixes a number of bugs on top of > hadoop-2.1.0-beta as a result of significant amounts of testing. > > If things go well, this might be the last of the *beta* releases of > hadoop-2.x. > > The RC is available at: > http://people.apache.org/~acmurthy/hadoop-2.1.1-beta-rc0 > The RC tag in svn is here: > http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.1.1-beta-rc0 > > The maven artifacts are available via repository.apache.org. > > Please try the release and vote; the vote will run for the usual 7 days. > > thanks, > Arun > > > -- > Arun C. Murthy > Hortonworks Inc. > http://hortonworks.com/ > > > > -- > CONFIDENTIALITY NOTICE > NOTICE: This message is intended for the use of the individual or entity to > which it is addressed and may contain information that is confidential, > privileged and exempt from disclosure under applicable law. If the reader > of this message is not the intended recipient, you are hereby notified that > any printing, copying, dissemination, distribution, disclosure or > forwarding of this communication is strictly prohibited. If you have > received this communication in error, please contact the sender immediately > and delete it from your system. Thank You. > -- Alejandro
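If another RC is cut, NIT1 is easy to catch up front; a minimal sketch, run from the expanded source tree:

# Zero-length Java files make apache-rat:check fail because RAT finds no license header in them
find . -name '*.java' -empty
# Re-run the audit once they are removed
mvn apache-rat:check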
[jira] [Resolved] (MAPREDUCE-5379) Include token tracking ids in jobconf
[ https://issues.apache.org/jira/browse/MAPREDUCE-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur resolved MAPREDUCE-5379. --- Resolution: Fixed Fix Version/s: 2.1.1-beta Hadoop Flags: Reviewed Thanks Karthik. Committed to trunk, branch-2 and branch-2.1-beta. > Include token tracking ids in jobconf > - > > Key: MAPREDUCE-5379 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5379 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: job submission, security >Affects Versions: 2.1.0-beta >Reporter: Sandy Ryza >Assignee: Karthik Kambatla > Fix For: 2.1.1-beta > > Attachments: MAPREDUCE-5379-1.patch, MAPREDUCE-5379-2.patch, > MAPREDUCE-5379.patch, mr-5379-3.patch, mr-5379-4.patch > > > HDFS-4680 enables audit logging delegation tokens. By storing the tracking > ids in the job conf, we can enable tracking what files each job touches. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (MAPREDUCE-5483) revert MAPREDUCE-5357
Alejandro Abdelnur created MAPREDUCE-5483: - Summary: revert MAPREDUCE-5357 Key: MAPREDUCE-5483 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5483 Project: Hadoop Map/Reduce Issue Type: Bug Components: distcp Affects Versions: 2.1.0-beta Reporter: Alejandro Abdelnur Fix For: 2.1.1-beta MAPREDUCE-5357 does a filesystem chown() operation. chown() is not valid unless you are the superuser; a chown() to yourself is a NOP, which is why this has not been detected in Hadoop testcases where the user is running as itself. However, in distcp testcases run by Oozie, which use test users/groups from UGI for the minicluster, it is failing because of this chown(), either because the test user does not exist or because the current user does not have privileges to do a chown(). We should revert MAPREDUCE-5357. Windows should handle this with some conditional logic used only when running in Windows. Opening a new JIRA and not reverting directly because MAPREDUCE-5357 went into 2.1.0-beta. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
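The chown() semantics described in the report are easy to see from the shell as well (a sketch; the user name and path below are made up for illustration):

# As a regular user, changing a file's owner to somebody else is rejected by HDFS
hadoop fs -chown someotheruser /user/tucu/some-file
# Changing the owner to yourself is effectively a no-op, which is why the problem
# stays hidden when the tests run everything as a single user
hadoop fs -chown $(whoami) /user/tucu/some-file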
[jira] [Created] (MAPREDUCE-5473) JT webservices use a static SimpleDateFormat, SimpleDateFormat is not thread-safe
Alejandro Abdelnur created MAPREDUCE-5473: - Summary: JT webservices use a static SimpleDateFormat, SimpleDateFormat is not thread-safe Key: MAPREDUCE-5473 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5473 Project: Hadoop Map/Reduce Issue Type: Bug Components: mrv1 Affects Versions: 1.2.0 Reporter: Alejandro Abdelnur MAPREDUCE-4837 is doing: {code} <%!static SimpleDateFormat dateFormat = new SimpleDateFormat( "d-MMM- HH:mm:ss"); {code} But SimpleDateFormat is not thread-safe. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
Re: [VOTE] Release Apache Hadoop 2.0.6-alpha (RC1)
+1 Downloaded source tarball Verified MD5 Verified Signature Run apache-rat:check Did a dist build Started pseudo cluster Run a couple of MR examples Tested HttpFS On Thu, Aug 15, 2013 at 10:29 PM, Konstantin Boudnik wrote: > All, > > I have created a release candidate (rc1) for hadoop-2.0.6-alpha that I > would > like to release. > > This is a stabilization release that includes fixed for a couple a of > issues > as outlined on the security list. > > The RC is available at: > http://people.apache.org/~cos/hadoop-2.0.6-alpha-rc1/ > The RC tag in svn is here: > http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.6-alpha-rc1 > > The maven artifacts are available via repository.apache.org. > > The only difference between rc0 and rc1 is ASL added to releasenotes.html > and > updated release dates in CHANGES.txt files. > > Please try the release bits and vote; the vote will run for the usual 7 > days. > > Thanks for your voting > Cos > > -- Alejandro
Re: [VOTE] Release Apache Hadoop 2.0.6-alpha
it should be straight forward adding the license headers to the release notes. please make sure apache-rat:check passes on the RC before publishing it. Arun, as you are about to cut the new RC for 2.1.0-beta, can you please make sure the license headers are used in the releasenotes HTML files? Thx On Thu, Aug 15, 2013 at 8:02 PM, Konstantin Boudnik wrote: > Alejandro, > > looking into the source code: it seems that release notes never had license > boilerplate in it, hence 2.0.6-alpha doesn't have as well. > > I have fixed CHANGES with new optimistic date of the release and upload rc1 > right now. > > Please let me know if you feel like we need start doing the license for the > releasenotes in this release. > > Thanks, > Cos > > On Wed, Aug 14, 2013 at 10:40AM, Alejandro Abdelnur wrote: > > OK: > > * verified MD5 > > * verified signature > > * expanded source tar and did a build > > * configured pseudo cluster and run a couple of example MR jobs > > * did a few HTTP calls to HTTFS > > > > NOT OK: > > * CHANGES.txt files have 2.0.6 as UNRELEASED, they should have the date > the > > RC vote ends > > * 'mvn apache-rat:check' fails, releasenotes HTML files don't have > license > > headers, > > > > I think we need to address the NO OK points (specially the last one), > they > > are trivial. > > > > Thanks. > > > > > > > > On Sat, Aug 10, 2013 at 5:46 PM, Konstantin Boudnik > wrote: > > > > > All, > > > > > > I have created a release candidate (rc0) for hadoop-2.0.6-alpha that I > > > would > > > like to release. > > > > > > This is a stabilization release that includes fixed for a couple a of > > > issues > > > as outlined on the security list. > > > > > > The RC is available at: > > > http://people.apache.org/~cos/hadoop-2.0.6-alpha-rc0/ > > > The RC tag in svn is here: > > > > http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.6-alpha-rc0 > > > > > > The maven artifacts are available via repository.apache.org. > > > > > > Please try the release bits and vote; the vote will run for the usual 7 > > > days. > > > > > > Thanks for your voting > > > Cos > > > > > > > > > > > > -- > > Alejandro > -- Alejandro
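When apache-rat:check does fail, the offending files are listed in the plugin's report, so the check is cheap to run before publishing (a sketch; rat.txt is the plugin's default report location and may differ if a pom overrides it):

mvn apache-rat:check
# On failure, each module writes the list of files without an approved license header here
find . -path '*/target/rat.txt'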
Re: [ACTION NEEDED]: protoc 2.5.0 in trunk/branch-2/branch-2.1-beta/branch-2.1.0-beta
forgot to add: A big thanks to Rajiv and Giri for helping out with the changes in the Jenkins boxes. On Wed, Aug 14, 2013 at 4:03 PM, Alejandro Abdelnur wrote: > Following up on this. > > HADOOP-9845 & HADOOP-9872 have been committed > to trunk/branch-2/branch-2.1-beta/branch-2.1.0-beta. > > All Hadoop developers must install protoc 2.5.0 in their development > machines for the build to run. > > All Hadoop jenkins boxes are using protoc 2.5.0 > > The BUILDING.txt file has been updated to reflect that protoc 2.5.0 is the > required one and includes instructions on how to use a different protoc > from multiple local versions (using an ENV var). This may be handy for > folks working with Hadoop versions using protoc 2.4.1. > > INTERIM SOLUTION IF YOU CANNOT UPGRADE TO PROTOC 2.5.0 IMMEDIATELY > > Use the following option with all your Maven commands > '-Dprotobuf.version=2.4.1'. > > Note that this option will make the build use protoc and protobuf 2.4.1. > > Though you should upgrade to 2.5.0 at the earliest. > > As soon as we start using the new goodies from protobuf 2.5.0 (like the > non-copy bytearrays) 2.4.1 will not work anymore. > > Thanks and apologies again for the noise through out this change. > > -- > Alejandro > -- Alejandro
[ACTION NEEDED]: protoc 2.5.0 in trunk/branch-2/branch-2.1-beta/branch-2.1.0-beta
Following up on this. HADOOP-9845 & HADOOP-9872 have been committed to trunk/branch-2/branch-2.1-beta/branch-2.1.0-beta. All Hadoop developers must install protoc 2.5.0 in their development machines for the build to run. All Hadoop jenkins boxes are using protoc 2.5.0 The BUILDING.txt file has been updated to reflect that protoc 2.5.0 is the required one and includes instructions on how to use a different protoc from multiple local versions (using an ENV var). This may be handy for folks working with Hadoop versions using protoc 2.4.1. INTERIM SOLUTION IF YOU CANNOT UPGRADE TO PROTOC 2.5.0 IMMEDIATELY Use the following option with all your Maven commands '-Dprotobuf.version=2.4.1'. Note that this option will make the build use protoc and protobuf 2.4.1. Though you should upgrade to 2.5.0 at the earliest. As soon as we start using the new goodies from protobuf 2.5.0 (like the non-copy bytearrays) 2.4.1 will not work anymore. Thanks and apologies again for the noise through out this change. -- Alejandro
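Concretely, checking the local toolchain and using the interim fallback looks roughly like this (a sketch of the commands described above):

# The build now expects protoc 2.5.0 on the PATH
protoc --version    # should report: libprotoc 2.5.0

# Interim only: force the build back to protobuf/protoc 2.4.1
mvn clean install -DskipTests -Dprotobuf.version=2.4.1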
[UPDATE 3] Upgrade to protobuf 2.5.0 for the 2.1.0 release, HADOOP-9845
test-patch came back. I'll commit to trunk and all "2" branches. Once done I'll send an email indicating new protoc is required for development. Thanks. On Wed, Aug 14, 2013 at 10:51 AM, Alejandro Abdelnur wrote: > I've filed https://issues.apache.org/jira/browse/HADOOP-9872 addressing > the following: > > - > >- handles protoc version correctly independently of the exit code >- if HADOOP_PROTOC_PATH env var is defined, it uses it as the protoc >executable * if HADOOP_PROTOC_PATH is not defined, it picks protoc from the >PATH >- documentation updated to reflect 2.5.0 is required >- enforces the version of protoc and protobuf JAR are the same >- Added to VersionInfo the protoc version used (sooner or later this >will be useful for in a troubleshooting situation). > > Luke Lu<https://issues.apache.org/jira/secure/ViewProfile.jspa?name=vicaya> > suggested > to make the version check for protoc lax (i.e. 2.5.*). While working on the > patch I've thought about that. But that would introduce a potential > mismatch between protoc and protobuff JAR. > > Still If you want to use different version of protoc/protobuff from the > one defined in the POM, you can use the -Dprotobuf.version= to specify > your alternate version. But I would recommend not to do this, because if > you publish the artifacts to a Maven repo, the fact you used > -Dprotobuf.version= will be lost and the version defined in the POM > properties will be used (IMO Maven should use the effective POM on deploy, > but they don't). > > - > > It would be great if a few people test the patch locally. > > Once this is committed to trunk I'll bacport HADOOP-9845 & HADOOP-9872 to > all the 2 branches. > > Thx. > > On Tue, Aug 13, 2013 at 1:09 PM, Alejandro Abdelnur wrote: > >> >> There is no indication that protoc 2.5.0 is breaking anything. >> >> Hadoop-trunk builds have been failing way before 1/2 way with: >> >> --- >> >> >> [ERROR] Failed to execute goal >> org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on >> project hadoop-yarn-client: ExecutionException; nested exception is >> java.util.concurrent.ExecutionException: java.lang.RuntimeException: The >> forked VM terminated without saying properly goodbye. VM crash or >> System.exit called ? -> [Help 1] >> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute >> goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test >> (default-test) on project hadoop-yarn-client: ExecutionException; nested >> exception is java.util.concurrent.ExecutionException: >> java.lang.RuntimeException: The forked VM terminated without saying properly >> goodbye. VM crash or System.exit called ? >> >> --- >> >> >> The Hadoop-trunk #480 build failed with a JVM abort in a testcase towards >> the end of mapreduce tests. >> >> Until then there were no failures at all. >> >> I've increased heap size and tried a second run and the failure was >> earlier. >> >> I've looked a Hadoop-trunk builds prior to the HADOOP-9845 and it has >> been failing the same way in all the kept builds. >> >> We need to fix Hadoop-trunk builds independently of this. >> >> Any objection to commit HADOOP-9845 to branch-2 and the 2.1.0-beta >> branches to get all the other jenkins jobs working? >> >> I'll wait till tomorrow morning before proceeding. >> >> Thx >> >> >> >> >> On Mon, Aug 12, 2013 at 8:35 PM, Alejandro Abdelnur >> wrote: >> >>> Jenkins is running a full test run on trunk using protoc 2.5.0. 
>>> >>> https://builds.apache.org/job/Hadoop-trunk/480 >>> >>> And it seems go be going just fine. >>> >>> If everything looks OK, I'm planing to backport HADOOP-9845 to the >>> 2.1.0-beta branch midday PST tomorrow. This will normalize all builds >>> failures do the protoc mismatch. >>> >>> Thanks. >>> >>> Alejandro >>> >>> >>> On Mon, Aug 12, 2013 at 5:53 PM, Alejandro Abdelnur >>> wrote: >>> >>>> shooting to get it i n for 2.1.0. >>>> >>>> at moment is in trunk till the nightly finishes. then we'll decide >>>> >>>> in the mean time, you can have multiple versions installed in diff dirs >>>> and set the right one in the path >>>> >>>> thx >>>> >>>> Alejan
[UPDATE 2] Upgrade to protobuf 2.5.0 for the 2.1.0 release, HADOOP-9845
I've filed https://issues.apache.org/jira/browse/HADOOP-9872 addressing the following: - - handles protoc version correctly independently of the exit code - if HADOOP_PROTOC_PATH env var is defined, it uses it as the protoc executable * if HADOOP_PROTOC_PATH is not defined, it picks protoc from the PATH - documentation updated to reflect 2.5.0 is required - enforces the version of protoc and protobuf JAR are the same - Added to VersionInfo the protoc version used (sooner or later this will be useful for in a troubleshooting situation). Luke Lu <https://issues.apache.org/jira/secure/ViewProfile.jspa?name=vicaya> suggested to make the version check for protoc lax (i.e. 2.5.*). While working on the patch I've thought about that. But that would introduce a potential mismatch between protoc and protobuff JAR. Still If you want to use different version of protoc/protobuff from the one defined in the POM, you can use the -Dprotobuf.version= to specify your alternate version. But I would recommend not to do this, because if you publish the artifacts to a Maven repo, the fact you used -Dprotobuf.version= will be lost and the version defined in the POM properties will be used (IMO Maven should use the effective POM on deploy, but they don't). - It would be great if a few people test the patch locally. Once this is committed to trunk I'll bacport HADOOP-9845 & HADOOP-9872 to all the 2 branches. Thx. On Tue, Aug 13, 2013 at 1:09 PM, Alejandro Abdelnur wrote: > > There is no indication that protoc 2.5.0 is breaking anything. > > Hadoop-trunk builds have been failing way before 1/2 way with: > > --- > > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on > project hadoop-yarn-client: ExecutionException; nested exception is > java.util.concurrent.ExecutionException: java.lang.RuntimeException: The > forked VM terminated without saying properly goodbye. VM crash or System.exit > called ? -> [Help 1] > org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute > goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test > (default-test) on project hadoop-yarn-client: ExecutionException; nested > exception is java.util.concurrent.ExecutionException: > java.lang.RuntimeException: The forked VM terminated without saying properly > goodbye. VM crash or System.exit called ? > > --- > > > The Hadoop-trunk #480 build failed with a JVM abort in a testcase towards > the end of mapreduce tests. > > Until then there were no failures at all. > > I've increased heap size and tried a second run and the failure was > earlier. > > I've looked a Hadoop-trunk builds prior to the HADOOP-9845 and it has been > failing the same way in all the kept builds. > > We need to fix Hadoop-trunk builds independently of this. > > Any objection to commit HADOOP-9845 to branch-2 and the 2.1.0-beta > branches to get all the other jenkins jobs working? > > I'll wait till tomorrow morning before proceeding. > > Thx > > > > > On Mon, Aug 12, 2013 at 8:35 PM, Alejandro Abdelnur wrote: > >> Jenkins is running a full test run on trunk using protoc 2.5.0. >> >> https://builds.apache.org/job/Hadoop-trunk/480 >> >> And it seems go be going just fine. >> >> If everything looks OK, I'm planing to backport HADOOP-9845 to the >> 2.1.0-beta branch midday PST tomorrow. This will normalize all builds >> failures do the protoc mismatch. >> >> Thanks. >> >> Alejandro >> >> >> On Mon, Aug 12, 2013 at 5:53 PM, Alejandro Abdelnur wrote: >> >>> shooting to get it i n for 2.1.0. 
>>> >>> at moment is in trunk till the nightly finishes. then we'll decide >>> >>> in the mean time, you can have multiple versions installed in diff dirs >>> and set the right one in the path >>> >>> thx >>> >>> Alejandro >>> (phone typing) >>> >>> On Aug 12, 2013, at 17:47, Konstantin Shvachko >>> wrote: >>> >>> > Ok. After installing protobuf 2.5.0 I can compile trunk. >>> > But now I cannot compile Hadoop-2 branches. None of them. >>> > So if I switch between branches I need to reinstall protobuf? >>> > >>> > Is there a consensus about going towards protobuf 2.5.0 upgrade in ALL >>> > versions? >>> > I did not get definite impression there is. >>> > If not it could be a pretty big disruption. >>> > >>> > Thanks, >>> > --Konst >>> > >>> > >>> >
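For the HADOOP_PROTOC_PATH behaviour described above, usage is roughly as follows (a sketch; the install location is illustrative):

# Point the build at a specific protoc binary without changing the PATH
export HADOOP_PROTOC_PATH=/opt/protobuf-2.5.0/bin/protoc
mvn clean install -DskipTests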
Re: [VOTE] Release Apache Hadoop 2.0.6-alpha
OK: * verified MD5 * verified signature * expanded source tar and did a build * configured pseudo cluster and run a couple of example MR jobs * did a few HTTP calls to HTTFS NOT OK: * CHANGES.txt files have 2.0.6 as UNRELEASED, they should have the date the RC vote ends * 'mvn apache-rat:check' fails, releasenotes HTML files don't have license headers, I think we need to address the NO OK points (specially the last one), they are trivial. Thanks. On Sat, Aug 10, 2013 at 5:46 PM, Konstantin Boudnik wrote: > All, > > I have created a release candidate (rc0) for hadoop-2.0.6-alpha that I > would > like to release. > > This is a stabilization release that includes fixed for a couple a of > issues > as outlined on the security list. > > The RC is available at: > http://people.apache.org/~cos/hadoop-2.0.6-alpha-rc0/ > The RC tag in svn is here: > http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.6-alpha-rc0 > > The maven artifacts are available via repository.apache.org. > > Please try the release bits and vote; the vote will run for the usual 7 > days. > > Thanks for your voting > Cos > > -- Alejandro
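The first NOT OK item is easy to check mechanically before an RC is published (a sketch, run from the expanded source tree):

# Flag any CHANGES.txt that still marks the release being voted on as UNRELEASED
grep -rn "UNRELEASED" --include=CHANGES.txt .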
Re: [UPDATE] Upgrade to protobuf 2.5.0 for the 2.1.0 release, HADOOP-9845
There is no indication that protoc 2.5.0 is breaking anything. Hadoop-trunk builds have been failing way before 1/2 way with: --- [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-yarn-client: ExecutionException; nested exception is java.util.concurrent.ExecutionException: java.lang.RuntimeException: The forked VM terminated without saying properly goodbye. VM crash or System.exit called ? -> [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-yarn-client: ExecutionException; nested exception is java.util.concurrent.ExecutionException: java.lang.RuntimeException: The forked VM terminated without saying properly goodbye. VM crash or System.exit called ? --- The Hadoop-trunk #480 build failed with a JVM abort in a testcase towards the end of mapreduce tests. Until then there were no failures at all. I've increased heap size and tried a second run and the failure was earlier. I've looked a Hadoop-trunk builds prior to the HADOOP-9845 and it has been failing the same way in all the kept builds. We need to fix Hadoop-trunk builds independently of this. Any objection to commit HADOOP-9845 to branch-2 and the 2.1.0-beta branches to get all the other jenkins jobs working? I'll wait till tomorrow morning before proceeding. Thx On Mon, Aug 12, 2013 at 8:35 PM, Alejandro Abdelnur wrote: > Jenkins is running a full test run on trunk using protoc 2.5.0. > > https://builds.apache.org/job/Hadoop-trunk/480 > > And it seems go be going just fine. > > If everything looks OK, I'm planing to backport HADOOP-9845 to the > 2.1.0-beta branch midday PST tomorrow. This will normalize all builds > failures do the protoc mismatch. > > Thanks. > > Alejandro > > > On Mon, Aug 12, 2013 at 5:53 PM, Alejandro Abdelnur wrote: > >> shooting to get it i n for 2.1.0. >> >> at moment is in trunk till the nightly finishes. then we'll decide >> >> in the mean time, you can have multiple versions installed in diff dirs >> and set the right one in the path >> >> thx >> >> Alejandro >> (phone typing) >> >> On Aug 12, 2013, at 17:47, Konstantin Shvachko >> wrote: >> >> > Ok. After installing protobuf 2.5.0 I can compile trunk. >> > But now I cannot compile Hadoop-2 branches. None of them. >> > So if I switch between branches I need to reinstall protobuf? >> > >> > Is there a consensus about going towards protobuf 2.5.0 upgrade in ALL >> > versions? >> > I did not get definite impression there is. >> > If not it could be a pretty big disruption. >> > >> > Thanks, >> > --Konst >> > >> > >> > >> > On Mon, Aug 12, 2013 at 3:19 PM, Alejandro Abdelnur > >wrote: >> > >> >> I've just committed HADOOP-9845 to trunk (only trunk at the moment). >> >> >> >> To build trunk now you need protoc 2.5.0 (the build will fail with a >> >> warning if you don't have it). >> >> >> >> We'd propagate this to the 2 branches once the precommit build is back >> to >> >> normal and see things are OK. >> >> >> >> Thanks. >> >> >> >> >> >> On Mon, Aug 12, 2013 at 2:57 PM, Alejandro Abdelnur > >>> wrote: >> >> >> >>> About to commit HADOOP-9845 to trunk, in 5 mins. This will make trunk >> use >> >>> protoc 2.5.0. >> >>> >> >>> thx >> >>> >> >>> >> >>> On Mon, Aug 12, 2013 at 11:47 AM, Giridharan Kesavan < >> >>> gkesa...@hortonworks.com> wrote: >> >>> >> >>>> I can take care of re-installing 2.4 and installing 2.5 in a >> different >> >>>> location. 
This would fix 2.0 branch builds as well. >> >>>> Thoughts? >> >>>> >> >>>> -Giri >> >>>> >> >>>> >> >>>> On Mon, Aug 12, 2013 at 11:37 AM, Alejandro Abdelnur < >> t...@cloudera.com >> >>>>> wrote: >> >>>> >> >>>>> Giri, >> >>>>> >> >>>>> first of all, thanks for installing protoc 2.5.0. >> >>>>> >> >>>>> I didn't know we were installing them as the only version and not >> >>>> driven by >> >>>>> env/path settings. >>
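On the "forked VM terminated" failures mentioned above, one knob commonly tried first is the heap given to the test JVMs that surefire forks; a sketch only, since whether it takes effect depends on how each module's pom already configures surefire, and the value is an assumption:

# Pass a larger heap to the forked test JVMs
mvn test -DargLine="-Xmx1024m"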
[UPDATE] Upgrade to protobuf 2.5.0 for the 2.1.0 release, HADOOP-9845
Jenkins is running a full test run on trunk using protoc 2.5.0. https://builds.apache.org/job/Hadoop-trunk/480 And it seems go be going just fine. If everything looks OK, I'm planing to backport HADOOP-9845 to the 2.1.0-beta branch midday PST tomorrow. This will normalize all builds failures do the protoc mismatch. Thanks. Alejandro On Mon, Aug 12, 2013 at 5:53 PM, Alejandro Abdelnur wrote: > shooting to get it i n for 2.1.0. > > at moment is in trunk till the nightly finishes. then we'll decide > > in the mean time, you can have multiple versions installed in diff dirs > and set the right one in the path > > thx > > Alejandro > (phone typing) > > On Aug 12, 2013, at 17:47, Konstantin Shvachko > wrote: > > > Ok. After installing protobuf 2.5.0 I can compile trunk. > > But now I cannot compile Hadoop-2 branches. None of them. > > So if I switch between branches I need to reinstall protobuf? > > > > Is there a consensus about going towards protobuf 2.5.0 upgrade in ALL > > versions? > > I did not get definite impression there is. > > If not it could be a pretty big disruption. > > > > Thanks, > > --Konst > > > > > > > > On Mon, Aug 12, 2013 at 3:19 PM, Alejandro Abdelnur >wrote: > > > >> I've just committed HADOOP-9845 to trunk (only trunk at the moment). > >> > >> To build trunk now you need protoc 2.5.0 (the build will fail with a > >> warning if you don't have it). > >> > >> We'd propagate this to the 2 branches once the precommit build is back > to > >> normal and see things are OK. > >> > >> Thanks. > >> > >> > >> On Mon, Aug 12, 2013 at 2:57 PM, Alejandro Abdelnur >>> wrote: > >> > >>> About to commit HADOOP-9845 to trunk, in 5 mins. This will make trunk > use > >>> protoc 2.5.0. > >>> > >>> thx > >>> > >>> > >>> On Mon, Aug 12, 2013 at 11:47 AM, Giridharan Kesavan < > >>> gkesa...@hortonworks.com> wrote: > >>> > >>>> I can take care of re-installing 2.4 and installing 2.5 in a different > >>>> location. This would fix 2.0 branch builds as well. > >>>> Thoughts? > >>>> > >>>> -Giri > >>>> > >>>> > >>>> On Mon, Aug 12, 2013 at 11:37 AM, Alejandro Abdelnur < > t...@cloudera.com > >>>>> wrote: > >>>> > >>>>> Giri, > >>>>> > >>>>> first of all, thanks for installing protoc 2.5.0. > >>>>> > >>>>> I didn't know we were installing them as the only version and not > >>>> driven by > >>>>> env/path settings. > >>>>> > >>>>> Now we have a bit of a problem, precommit builds are broken because > of > >>>>> mismatch of protoc (2.5.0) and protobuf JAR( 2.4.1). > >>>>> > >>>>> We have to options: > >>>>> > >>>>> 1* commit HADOOP-9845 that will bring protobuf to 2.5.0 and iron out > >> any > >>>>> follow up issues. > >>>>> 2* reinstall protoc 2.4.1 in the jenkins machines and have 2.4.1 and > >>>> 2.5.0 > >>>>> coexisting > >>>>> > >>>>> My take would be to commit HADOOP-9845 in trunk, iron out any issues > >> an > >>>>> then merge it to the other branches. > >>>>> > >>>>> We need to sort this out quickly as precommits are not working. > >>>>> > >>>>> I'll wait till 3PM today for objections to option #1, if none I'll > >>>> commit > >>>>> it to trunk. > >>>>> > >>>>> Thanks. > >>>>> > >>>>> Alejandro > >>>>> > >>>>> > >>>>> > >>>>> On Mon, Aug 12, 2013 at 11:30 AM, Giridharan Kesavan < > >>>>> gkesa...@hortonworks.com> wrote: > >>>>> > >>>>>> Like I said protoc is upgraded from 2.4 to 2.5. 2.5 is in the > >> default > >>>>> path. > >>>>>> If we still need 2.4 I may have to install it. Let me know > >>>>>> > >>>>>> -Giri > >>>>>> > >>>>>> > >>>>>> On Sat, Aug 10, 2013 at 7:01 AM,
Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release, HADOOP-9845
shooting to get it i n for 2.1.0. at moment is in trunk till the nightly finishes. then we'll decide in the mean time, you can have multiple versions installed in diff dirs and set the right one in the path thx Alejandro (phone typing) On Aug 12, 2013, at 17:47, Konstantin Shvachko wrote: > Ok. After installing protobuf 2.5.0 I can compile trunk. > But now I cannot compile Hadoop-2 branches. None of them. > So if I switch between branches I need to reinstall protobuf? > > Is there a consensus about going towards protobuf 2.5.0 upgrade in ALL > versions? > I did not get definite impression there is. > If not it could be a pretty big disruption. > > Thanks, > --Konst > > > > On Mon, Aug 12, 2013 at 3:19 PM, Alejandro Abdelnur wrote: > >> I've just committed HADOOP-9845 to trunk (only trunk at the moment). >> >> To build trunk now you need protoc 2.5.0 (the build will fail with a >> warning if you don't have it). >> >> We'd propagate this to the 2 branches once the precommit build is back to >> normal and see things are OK. >> >> Thanks. >> >> >> On Mon, Aug 12, 2013 at 2:57 PM, Alejandro Abdelnur >> wrote: >> >>> About to commit HADOOP-9845 to trunk, in 5 mins. This will make trunk use >>> protoc 2.5.0. >>> >>> thx >>> >>> >>> On Mon, Aug 12, 2013 at 11:47 AM, Giridharan Kesavan < >>> gkesa...@hortonworks.com> wrote: >>> >>>> I can take care of re-installing 2.4 and installing 2.5 in a different >>>> location. This would fix 2.0 branch builds as well. >>>> Thoughts? >>>> >>>> -Giri >>>> >>>> >>>> On Mon, Aug 12, 2013 at 11:37 AM, Alejandro Abdelnur >>>> wrote: >>>> >>>>> Giri, >>>>> >>>>> first of all, thanks for installing protoc 2.5.0. >>>>> >>>>> I didn't know we were installing them as the only version and not >>>> driven by >>>>> env/path settings. >>>>> >>>>> Now we have a bit of a problem, precommit builds are broken because of >>>>> mismatch of protoc (2.5.0) and protobuf JAR( 2.4.1). >>>>> >>>>> We have to options: >>>>> >>>>> 1* commit HADOOP-9845 that will bring protobuf to 2.5.0 and iron out >> any >>>>> follow up issues. >>>>> 2* reinstall protoc 2.4.1 in the jenkins machines and have 2.4.1 and >>>> 2.5.0 >>>>> coexisting >>>>> >>>>> My take would be to commit HADOOP-9845 in trunk, iron out any issues >> an >>>>> then merge it to the other branches. >>>>> >>>>> We need to sort this out quickly as precommits are not working. >>>>> >>>>> I'll wait till 3PM today for objections to option #1, if none I'll >>>> commit >>>>> it to trunk. >>>>> >>>>> Thanks. >>>>> >>>>> Alejandro >>>>> >>>>> >>>>> >>>>> On Mon, Aug 12, 2013 at 11:30 AM, Giridharan Kesavan < >>>>> gkesa...@hortonworks.com> wrote: >>>>> >>>>>> Like I said protoc is upgraded from 2.4 to 2.5. 2.5 is in the >> default >>>>> path. >>>>>> If we still need 2.4 I may have to install it. Let me know >>>>>> >>>>>> -Giri >>>>>> >>>>>> >>>>>> On Sat, Aug 10, 2013 at 7:01 AM, Alejandro Abdelnur < >>>> t...@cloudera.com >>>>>>> wrote: >>>>>> >>>>>>> thanks giri, how do we set 2.4 or 2.5., what is the path to both >> so >>>> we >>>>>> can >>>>>>> use and env to set it in the jobs? >>>>>>> >>>>>>> thx >>>>>>> >>>>>>> Alejandro >>>>>>> (phone typing) >>>>>>> >>>>>>> On Aug 9, 2013, at 23:10, Giridharan Kesavan < >>>> gkesa...@hortonworks.com >>>>>> >>>>>>> wrote: >>>>>>> >>>>>>>> build slaves hadoop1-hadoop9 now has libprotoc 2.5.0 >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>
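Keeping both protoc versions installed side by side, as suggested here, can look like this (the install prefixes are illustrative):

# Build each protobuf release into its own prefix, e.g. from each source tree:
#   ./configure --prefix=/opt/protobuf-2.4.1 && make && sudo make install
#   ./configure --prefix=/opt/protobuf-2.5.0 && make && sudo make install
# Then pick the one matching the branch being built:
export PATH=/opt/protobuf-2.5.0/bin:$PATH
protoc --version    # libprotoc 2.5.0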
Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release, HADOOP-9845
I've just committed HADOOP-9845 to trunk (only trunk at the moment). To build trunk now you need protoc 2.5.0 (the build will fail with a warning if you don't have it). We'd propagate this to the 2 branches once the precommit build is back to normal and see things are OK. Thanks. On Mon, Aug 12, 2013 at 2:57 PM, Alejandro Abdelnur wrote: > About to commit HADOOP-9845 to trunk, in 5 mins. This will make trunk use > protoc 2.5.0. > > thx > > > On Mon, Aug 12, 2013 at 11:47 AM, Giridharan Kesavan < > gkesa...@hortonworks.com> wrote: > >> I can take care of re-installing 2.4 and installing 2.5 in a different >> location. This would fix 2.0 branch builds as well. >> Thoughts? >> >> -Giri >> >> >> On Mon, Aug 12, 2013 at 11:37 AM, Alejandro Abdelnur > >wrote: >> >> > Giri, >> > >> > first of all, thanks for installing protoc 2.5.0. >> > >> > I didn't know we were installing them as the only version and not >> driven by >> > env/path settings. >> > >> > Now we have a bit of a problem, precommit builds are broken because of >> > mismatch of protoc (2.5.0) and protobuf JAR( 2.4.1). >> > >> > We have to options: >> > >> > 1* commit HADOOP-9845 that will bring protobuf to 2.5.0 and iron out any >> > follow up issues. >> > 2* reinstall protoc 2.4.1 in the jenkins machines and have 2.4.1 and >> 2.5.0 >> > coexisting >> > >> > My take would be to commit HADOOP-9845 in trunk, iron out any issues an >> > then merge it to the other branches. >> > >> > We need to sort this out quickly as precommits are not working. >> > >> > I'll wait till 3PM today for objections to option #1, if none I'll >> commit >> > it to trunk. >> > >> > Thanks. >> > >> > Alejandro >> > >> > >> > >> > On Mon, Aug 12, 2013 at 11:30 AM, Giridharan Kesavan < >> > gkesa...@hortonworks.com> wrote: >> > >> > > Like I said protoc is upgraded from 2.4 to 2.5. 2.5 is in the default >> > path. >> > > If we still need 2.4 I may have to install it. Let me know >> > > >> > > -Giri >> > > >> > > >> > > On Sat, Aug 10, 2013 at 7:01 AM, Alejandro Abdelnur < >> t...@cloudera.com >> > > >wrote: >> > > >> > > > thanks giri, how do we set 2.4 or 2.5., what is the path to both so >> we >> > > can >> > > > use and env to set it in the jobs? >> > > > >> > > > thx >> > > > >> > > > Alejandro >> > > > (phone typing) >> > > > >> > > > On Aug 9, 2013, at 23:10, Giridharan Kesavan < >> gkesa...@hortonworks.com >> > > >> > > > wrote: >> > > > >> > > > > build slaves hadoop1-hadoop9 now has libprotoc 2.5.0 >> > > > > >> > > > > >> > > > > >> > > > > -Giri >> > > > > >> > > > > >> > > > > On Fri, Aug 9, 2013 at 10:56 PM, Giridharan Kesavan < >> > > > > gkesa...@hortonworks.com> wrote: >> > > > > >> > > > >> Alejandro, >> > > > >> >> > > > >> I'm upgrading protobuf on slaves hadoop1-hadoop9. >> > > > >> >> > > > >> -Giri >> > > > >> >> > > > >> >> > > > >> On Fri, Aug 9, 2013 at 1:15 PM, Alejandro Abdelnur < >> > t...@cloudera.com >> > > > >wrote: >> > > > >> >> > > > >>> pinging again, I need help from somebody with sudo access to the >> > > hadoop >> > > > >>> jenkins boxes to do this or to get sudo access for a couple of >> > hours >> > > to >> > > > >>> set >> > > > >>> up myself. >> > > > >>> >> > > > >>> Please!!! >> > > > >>> >> > > > >>> thx >> > > > >>> >> > > > >>> >> > > > >>> On Thu, Aug 8, 2013 at 2:29 PM, Alejandro Abdelnur < >> > > t...@cloudera.com >> > > > >>>> wrote: >> > > > >>> >> > > > >>>> To
Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release, HADOOP-9845
About to commit HADOOP-9845 to trunk, in 5 mins. This will make trunk use protoc 2.5.0. thx On Mon, Aug 12, 2013 at 11:47 AM, Giridharan Kesavan < gkesa...@hortonworks.com> wrote: > I can take care of re-installing 2.4 and installing 2.5 in a different > location. This would fix 2.0 branch builds as well. > Thoughts? > > -Giri > > > On Mon, Aug 12, 2013 at 11:37 AM, Alejandro Abdelnur >wrote: > > > Giri, > > > > first of all, thanks for installing protoc 2.5.0. > > > > I didn't know we were installing them as the only version and not driven > by > > env/path settings. > > > > Now we have a bit of a problem, precommit builds are broken because of > > mismatch of protoc (2.5.0) and protobuf JAR( 2.4.1). > > > > We have to options: > > > > 1* commit HADOOP-9845 that will bring protobuf to 2.5.0 and iron out any > > follow up issues. > > 2* reinstall protoc 2.4.1 in the jenkins machines and have 2.4.1 and > 2.5.0 > > coexisting > > > > My take would be to commit HADOOP-9845 in trunk, iron out any issues an > > then merge it to the other branches. > > > > We need to sort this out quickly as precommits are not working. > > > > I'll wait till 3PM today for objections to option #1, if none I'll > commit > > it to trunk. > > > > Thanks. > > > > Alejandro > > > > > > > > On Mon, Aug 12, 2013 at 11:30 AM, Giridharan Kesavan < > > gkesa...@hortonworks.com> wrote: > > > > > Like I said protoc is upgraded from 2.4 to 2.5. 2.5 is in the default > > path. > > > If we still need 2.4 I may have to install it. Let me know > > > > > > -Giri > > > > > > > > > On Sat, Aug 10, 2013 at 7:01 AM, Alejandro Abdelnur > > >wrote: > > > > > > > thanks giri, how do we set 2.4 or 2.5., what is the path to both so > we > > > can > > > > use and env to set it in the jobs? > > > > > > > > thx > > > > > > > > Alejandro > > > > (phone typing) > > > > > > > > On Aug 9, 2013, at 23:10, Giridharan Kesavan < > gkesa...@hortonworks.com > > > > > > > wrote: > > > > > > > > > build slaves hadoop1-hadoop9 now has libprotoc 2.5.0 > > > > > > > > > > > > > > > > > > > > -Giri > > > > > > > > > > > > > > > On Fri, Aug 9, 2013 at 10:56 PM, Giridharan Kesavan < > > > > > gkesa...@hortonworks.com> wrote: > > > > > > > > > >> Alejandro, > > > > >> > > > > >> I'm upgrading protobuf on slaves hadoop1-hadoop9. > > > > >> > > > > >> -Giri > > > > >> > > > > >> > > > > >> On Fri, Aug 9, 2013 at 1:15 PM, Alejandro Abdelnur < > > t...@cloudera.com > > > > >wrote: > > > > >> > > > > >>> pinging again, I need help from somebody with sudo access to the > > > hadoop > > > > >>> jenkins boxes to do this or to get sudo access for a couple of > > hours > > > to > > > > >>> set > > > > >>> up myself. > > > > >>> > > > > >>> Please!!! > > > > >>> > > > > >>> thx > > > > >>> > > > > >>> > > > > >>> On Thu, Aug 8, 2013 at 2:29 PM, Alejandro Abdelnur < > > > t...@cloudera.com > > > > >>>> wrote: > > > > >>> > > > > >>>> To move forward with this we need protoc 2.5.0 in the apache > > hadoop > > > > >>>> jenkins boxes. > > > > >>>> > > > > >>>> Who can help with this? I assume somebody at Y!, right? > > > > >>>> > > > > >>>> Thx > > > > >>>> > > > > >>>> > > > > >>>> On Thu, Aug 8, 2013 at 2:24 PM, Elliott Clark < > ecl...@apache.org> > > > > >>> wrote: > > > > >>>> > > > > >>>>> In HBase land we've pretty well discovered that we'll need to > > have > > > > the > > > > >>>>> same version of protobuf that the HDFS/Yarn/MR servers are > > running. > > &
Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release, HADOOP-9845
Giri, first of all, thanks for installing protoc 2.5.0. I didn't know we were installing them as the only version and not driven by env/path settings. Now we have a bit of a problem, precommit builds are broken because of mismatch of protoc (2.5.0) and protobuf JAR( 2.4.1). We have to options: 1* commit HADOOP-9845 that will bring protobuf to 2.5.0 and iron out any follow up issues. 2* reinstall protoc 2.4.1 in the jenkins machines and have 2.4.1 and 2.5.0 coexisting My take would be to commit HADOOP-9845 in trunk, iron out any issues an then merge it to the other branches. We need to sort this out quickly as precommits are not working. I'll wait till 3PM today for objections to option #1, if none I'll commit it to trunk. Thanks. Alejandro On Mon, Aug 12, 2013 at 11:30 AM, Giridharan Kesavan < gkesa...@hortonworks.com> wrote: > Like I said protoc is upgraded from 2.4 to 2.5. 2.5 is in the default path. > If we still need 2.4 I may have to install it. Let me know > > -Giri > > > On Sat, Aug 10, 2013 at 7:01 AM, Alejandro Abdelnur >wrote: > > > thanks giri, how do we set 2.4 or 2.5., what is the path to both so we > can > > use and env to set it in the jobs? > > > > thx > > > > Alejandro > > (phone typing) > > > > On Aug 9, 2013, at 23:10, Giridharan Kesavan > > wrote: > > > > > build slaves hadoop1-hadoop9 now has libprotoc 2.5.0 > > > > > > > > > > > > -Giri > > > > > > > > > On Fri, Aug 9, 2013 at 10:56 PM, Giridharan Kesavan < > > > gkesa...@hortonworks.com> wrote: > > > > > >> Alejandro, > > >> > > >> I'm upgrading protobuf on slaves hadoop1-hadoop9. > > >> > > >> -Giri > > >> > > >> > > >> On Fri, Aug 9, 2013 at 1:15 PM, Alejandro Abdelnur > >wrote: > > >> > > >>> pinging again, I need help from somebody with sudo access to the > hadoop > > >>> jenkins boxes to do this or to get sudo access for a couple of hours > to > > >>> set > > >>> up myself. > > >>> > > >>> Please!!! > > >>> > > >>> thx > > >>> > > >>> > > >>> On Thu, Aug 8, 2013 at 2:29 PM, Alejandro Abdelnur < > t...@cloudera.com > > >>>> wrote: > > >>> > > >>>> To move forward with this we need protoc 2.5.0 in the apache hadoop > > >>>> jenkins boxes. > > >>>> > > >>>> Who can help with this? I assume somebody at Y!, right? > > >>>> > > >>>> Thx > > >>>> > > >>>> > > >>>> On Thu, Aug 8, 2013 at 2:24 PM, Elliott Clark > > >>> wrote: > > >>>> > > >>>>> In HBase land we've pretty well discovered that we'll need to have > > the > > >>>>> same version of protobuf that the HDFS/Yarn/MR servers are running. > > >>>>> That is to say there are issues with ever having 2.4.x and 2.5.x on > > >>>>> the same class path. > > >>>>> > > >>>>> Upgrading to 2.5.x would be great, as it brings some new classes we > > >>>>> could use. With that said HBase is getting pretty close to a > rather > > >>>>> large release (0.96.0 aka The Singularity) so getting this in > sooner > > >>>>> rather than later would be great. If we could get this into 2.1.0 > it > > >>>>> would be great as that would allow us to have a pretty easy story > to > > >>>>> users with regards to protobuf version. > > >>>>> > > >>>>> On Thu, Aug 8, 2013 at 8:18 AM, Kihwal Lee > > >>> wrote: > > >>>>>> Sorry to hijack the thread but, I also wanted to mention Avro. See > > >>>>> HADOOP-9672. > > >>>>>> The version we are using has memory leak and inefficiency issues. > > >>> We've > > >>>>> seen users running into it. 
> > >>>>>> > > >>>>>> Kihwal > > >>>>>> > > >>>>>> > > >>>>>> > > >>>>>> From: Tsuyoshi OZAWA > > >>>>>> To: "common-...@hadoop.apache.org" > > >>>>>> Cc: "hdfs-...@hadoop.apache.org&q
Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release, HADOOP-9845
thanks giri, how do we set 2.4 or 2.5., what is the path to both so we can use and env to set it in the jobs? thx Alejandro (phone typing) On Aug 9, 2013, at 23:10, Giridharan Kesavan wrote: > build slaves hadoop1-hadoop9 now has libprotoc 2.5.0 > > > > -Giri > > > On Fri, Aug 9, 2013 at 10:56 PM, Giridharan Kesavan < > gkesa...@hortonworks.com> wrote: > >> Alejandro, >> >> I'm upgrading protobuf on slaves hadoop1-hadoop9. >> >> -Giri >> >> >> On Fri, Aug 9, 2013 at 1:15 PM, Alejandro Abdelnur wrote: >> >>> pinging again, I need help from somebody with sudo access to the hadoop >>> jenkins boxes to do this or to get sudo access for a couple of hours to >>> set >>> up myself. >>> >>> Please!!! >>> >>> thx >>> >>> >>> On Thu, Aug 8, 2013 at 2:29 PM, Alejandro Abdelnur >>> wrote: >>> >>>> To move forward with this we need protoc 2.5.0 in the apache hadoop >>>> jenkins boxes. >>>> >>>> Who can help with this? I assume somebody at Y!, right? >>>> >>>> Thx >>>> >>>> >>>> On Thu, Aug 8, 2013 at 2:24 PM, Elliott Clark >>> wrote: >>>> >>>>> In HBase land we've pretty well discovered that we'll need to have the >>>>> same version of protobuf that the HDFS/Yarn/MR servers are running. >>>>> That is to say there are issues with ever having 2.4.x and 2.5.x on >>>>> the same class path. >>>>> >>>>> Upgrading to 2.5.x would be great, as it brings some new classes we >>>>> could use. With that said HBase is getting pretty close to a rather >>>>> large release (0.96.0 aka The Singularity) so getting this in sooner >>>>> rather than later would be great. If we could get this into 2.1.0 it >>>>> would be great as that would allow us to have a pretty easy story to >>>>> users with regards to protobuf version. >>>>> >>>>> On Thu, Aug 8, 2013 at 8:18 AM, Kihwal Lee >>> wrote: >>>>>> Sorry to hijack the thread but, I also wanted to mention Avro. See >>>>> HADOOP-9672. >>>>>> The version we are using has memory leak and inefficiency issues. >>> We've >>>>> seen users running into it. >>>>>> >>>>>> Kihwal >>>>>> >>>>>> >>>>>> >>>>>> From: Tsuyoshi OZAWA >>>>>> To: "common-...@hadoop.apache.org" >>>>>> Cc: "hdfs-...@hadoop.apache.org" ; " >>>>> yarn-...@hadoop.apache.org" ; " >>>>> mapreduce-dev@hadoop.apache.org" >>>>>> Sent: Thursday, August 8, 2013 1:59 AM >>>>>> Subject: Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release, >>>>> HADOOP-9845 >>>>>> >>>>>> >>>>>> Hi, >>>>>> >>>>>> About Hadoop, Harsh is dealing with this problem in HADOOP-9346. >>>>>> For more detail, please see the JIRA ticket: >>>>>> https://issues.apache.org/jira/browse/HADOOP-9346 >>>>>> >>>>>> - Tsuyoshi >>>>>> >>>>>> On Thu, Aug 8, 2013 at 1:49 AM, Alejandro Abdelnur < >>> t...@cloudera.com> >>>>> wrote: >>>>>>> I' like to upgrade to protobuf 2.5.0 for the 2.1.0 release. >>>>>>> >>>>>>> As mentioned in HADOOP-9845, Protobuf 2.5 has significant benefits >>> to >>>>>>> justify the upgrade. >>>>>>> >>>>>>> Doing the upgrade now, with the first beta, will make things easier >>> for >>>>>>> downstream projects (like HBase) using protobuf and adopting Hadoop >>> 2. >>>>> If >>>>>>> we do the upgrade later, downstream projects will have to support 2 >>>>>>> different versions and they my get in nasty waters due to classpath >>>>> issues. >>>>>>> >>>>>>> I've locally tested the patch in a pseudo deployment of 2.1.0-beta >>>>> branch >>>>>>> and it works fine (something is broken in trunk in the RPC layer >>>>> YARN-885). 
>>>>>>> >>>>>>> Now, to do this it will require a few things: >>>>>>> >>>>>>> * Make sure protobuf 2.5.0 is available in the jenkins box >>>>>>> * A follow up email to dev@ aliases indicating developers should >>>>> install >>>>>>> locally protobuf 2.5.0 >>>>>>> >>>>>>> Thanks. >>>>>>> >>>>>>> -- >>>>>>> Alejandro >>>> >>>> >>>> >>>> -- >>>> Alejandro >>> >>> >>> >>> -- >>> Alejandro >> >>
Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release, HADOOP-9845
pinging again, I need help from somebody with sudo access to the hadoop jenkins boxes to do this or to get sudo access for a couple of hours to set up myself. Please!!! thx On Thu, Aug 8, 2013 at 2:29 PM, Alejandro Abdelnur wrote: > To move forward with this we need protoc 2.5.0 in the apache hadoop > jenkins boxes. > > Who can help with this? I assume somebody at Y!, right? > > Thx > > > On Thu, Aug 8, 2013 at 2:24 PM, Elliott Clark wrote: > >> In HBase land we've pretty well discovered that we'll need to have the >> same version of protobuf that the HDFS/Yarn/MR servers are running. >> That is to say there are issues with ever having 2.4.x and 2.5.x on >> the same class path. >> >> Upgrading to 2.5.x would be great, as it brings some new classes we >> could use. With that said HBase is getting pretty close to a rather >> large release (0.96.0 aka The Singularity) so getting this in sooner >> rather than later would be great. If we could get this into 2.1.0 it >> would be great as that would allow us to have a pretty easy story to >> users with regards to protobuf version. >> >> On Thu, Aug 8, 2013 at 8:18 AM, Kihwal Lee wrote: >> > Sorry to hijack the thread but, I also wanted to mention Avro. See >> HADOOP-9672. >> > The version we are using has memory leak and inefficiency issues. We've >> seen users running into it. >> > >> > Kihwal >> > >> > >> > >> > From: Tsuyoshi OZAWA >> > To: "common-...@hadoop.apache.org" >> > Cc: "hdfs-...@hadoop.apache.org" ; " >> yarn-...@hadoop.apache.org" ; " >> mapreduce-dev@hadoop.apache.org" >> > Sent: Thursday, August 8, 2013 1:59 AM >> > Subject: Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release, >> HADOOP-9845 >> > >> > >> > Hi, >> > >> > About Hadoop, Harsh is dealing with this problem in HADOOP-9346. >> > For more detail, please see the JIRA ticket: >> > https://issues.apache.org/jira/browse/HADOOP-9346 >> > >> > - Tsuyoshi >> > >> > On Thu, Aug 8, 2013 at 1:49 AM, Alejandro Abdelnur >> wrote: >> >> I' like to upgrade to protobuf 2.5.0 for the 2.1.0 release. >> >> >> >> As mentioned in HADOOP-9845, Protobuf 2.5 has significant benefits to >> >> justify the upgrade. >> >> >> >> Doing the upgrade now, with the first beta, will make things easier for >> >> downstream projects (like HBase) using protobuf and adopting Hadoop 2. >> If >> >> we do the upgrade later, downstream projects will have to support 2 >> >> different versions and they my get in nasty waters due to classpath >> issues. >> >> >> >> I've locally tested the patch in a pseudo deployment of 2.1.0-beta >> branch >> >> and it works fine (something is broken in trunk in the RPC layer >> YARN-885). >> >> >> >> Now, to do this it will require a few things: >> >> >> >> * Make sure protobuf 2.5.0 is available in the jenkins box >> >> * A follow up email to dev@ aliases indicating developers should >> install >> >> locally protobuf 2.5.0 >> >> >> >> Thanks. >> >> >> >> -- >> >> Alejandro >> > > > > -- > Alejandro > -- Alejandro
Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release, HADOOP-9845
To move forward with this we need protoc 2.5.0 in the apache hadoop jenkins boxes. Who can help with this? I assume somebody at Y!, right? Thx On Thu, Aug 8, 2013 at 2:24 PM, Elliott Clark wrote: > In HBase land we've pretty well discovered that we'll need to have the > same version of protobuf that the HDFS/Yarn/MR servers are running. > That is to say there are issues with ever having 2.4.x and 2.5.x on > the same class path. > > Upgrading to 2.5.x would be great, as it brings some new classes we > could use. With that said HBase is getting pretty close to a rather > large release (0.96.0 aka The Singularity) so getting this in sooner > rather than later would be great. If we could get this into 2.1.0 it > would be great as that would allow us to have a pretty easy story to > users with regards to protobuf version. > > On Thu, Aug 8, 2013 at 8:18 AM, Kihwal Lee wrote: > > Sorry to hijack the thread but, I also wanted to mention Avro. See > HADOOP-9672. > > The version we are using has memory leak and inefficiency issues. We've > seen users running into it. > > > > Kihwal > > > > > > > > From: Tsuyoshi OZAWA > > To: "common-...@hadoop.apache.org" > > Cc: "hdfs-...@hadoop.apache.org" ; " > yarn-...@hadoop.apache.org" ; " > mapreduce-dev@hadoop.apache.org" > > Sent: Thursday, August 8, 2013 1:59 AM > > Subject: Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release, HADOOP-9845 > > > > > > Hi, > > > > About Hadoop, Harsh is dealing with this problem in HADOOP-9346. > > For more detail, please see the JIRA ticket: > > https://issues.apache.org/jira/browse/HADOOP-9346 > > > > - Tsuyoshi > > > > On Thu, Aug 8, 2013 at 1:49 AM, Alejandro Abdelnur > wrote: > >> I' like to upgrade to protobuf 2.5.0 for the 2.1.0 release. > >> > >> As mentioned in HADOOP-9845, Protobuf 2.5 has significant benefits to > >> justify the upgrade. > >> > >> Doing the upgrade now, with the first beta, will make things easier for > >> downstream projects (like HBase) using protobuf and adopting Hadoop 2. > If > >> we do the upgrade later, downstream projects will have to support 2 > >> different versions and they my get in nasty waters due to classpath > issues. > >> > >> I've locally tested the patch in a pseudo deployment of 2.1.0-beta > branch > >> and it works fine (something is broken in trunk in the RPC layer > YARN-885). > >> > >> Now, to do this it will require a few things: > >> > >> * Make sure protobuf 2.5.0 is available in the jenkins box > >> * A follow up email to dev@ aliases indicating developers should > install > >> locally protobuf 2.5.0 > >> > >> Thanks. > >> > >> -- > >> Alejandro > -- Alejandro
Upgrade to protobuf 2.5.0 for the 2.1.0 release, HADOOP-9845
I'd like to upgrade to protobuf 2.5.0 for the 2.1.0 release. As mentioned in HADOOP-9845, Protobuf 2.5 has significant benefits that justify the upgrade. Doing the upgrade now, with the first beta, will make things easier for downstream projects (like HBase) using protobuf and adopting Hadoop 2. If we do the upgrade later, downstream projects will have to support 2 different versions and they may get into nasty waters due to classpath issues. I've locally tested the patch in a pseudo deployment of the 2.1.0-beta branch and it works fine (something is broken in trunk in the RPC layer, YARN-885). Now, doing this will require a few things: * Make sure protobuf 2.5.0 is available in the jenkins box * A follow-up email to the dev@ aliases indicating developers should install protobuf 2.5.0 locally Thanks. -- Alejandro
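The classpath concern raised in this thread is that protobuf 2.5.0 generated code expects runtime classes that the 2.4.x jar does not ship. As a purely illustrative sketch (not part of HADOOP-9845), the check below probes for com.google.protobuf.Parser, an interface that, to the best of my knowledge, first appeared in protobuf-java 2.5.0, to report which runtime a downstream project actually picked up:

{code}
// Illustrative only (not from any Hadoop patch): report whether the
// protobuf-java runtime on the classpath is 2.5.x or older by probing for
// com.google.protobuf.Parser, believed to be introduced in protobuf-java 2.5.0.
public final class ProtobufRuntimeCheck {
  public static void main(String[] args) {
    try {
      Class.forName("com.google.protobuf.Parser");
      System.out.println("protobuf-java 2.5.x (or newer) is on the classpath");
    } catch (ClassNotFoundException e) {
      System.out.println("protobuf-java 2.4.x (or older) is on the classpath");
    }
  }
}
{code}

The generated classes must also have been compiled with a matching protoc, so protoc --version on the Jenkins slaves and developer machines should report 2.5.0 once the upgrade lands.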
Re: [VOTE] Release Apache Hadoop 2.1.0-beta
Thanks Arun, +1 * verified MD5 & signature of source tarball. * built from source tarball * run apache-rat:check on source * installed pseudo cluster (unsecure) * test httpfs * run pi example * run unmanaged AM application Minor NITs (in case we do a new RC): * remove 2.1.1.-beta section from all CHANGES.txt files * the following hidden files shouldn't be in the source file (nor in SVN): ./.gitattributes (in SVN is OK) ./hadoop-common-project/hadoop-common/.eclipse.templates ./hadoop-common-project/hadoop-common/.eclipse.templates/.externalToolBuilders ./hadoop-hdfs-project/hadoop-hdfs/.eclipse.templates ./hadoop-hdfs-project/hadoop-hdfs/.eclipse.templates/.launches ./hadoop-mapreduce-project/.eclipse.templates ./hadoop-mapreduce-project/.eclipse.templates/.launches ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps On Tue, Jul 30, 2013 at 6:29 AM, Arun C Murthy wrote: > Folks, > > I've created another release candidate (rc1) for hadoop-2.1.0-beta that I > would like to get released. This RC fixes a number of issues reported on > the previous candidate. > > This release represents a *huge* amount of work done by the community > (~650 fixes) which includes several major advances including: > # HDFS Snapshots > # Windows support > # YARN API stabilization > # MapReduce Binary Compatibility with hadoop-1.x > # Substantial amount of integration testing with rest of projects in the > ecosystem > > The RC is available at: > http://people.apache.org/~acmurthy/hadoop-2.1.0-beta-rc1/ > The RC tag in svn is here: > http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.1.0-beta-rc1 > > The maven artifacts are available via repository.apache.org. > > Please try the release and vote; the vote will run for the usual 7 days. > > thanks, > Arun > > -- > Arun C. Murthy > Hortonworks Inc. > http://hortonworks.com/ > > > -- Alejandro
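For anyone repeating the verification steps above, the checksum step is easy to script. Below is a minimal sketch using only the JDK; the tarball name is just a placeholder for whichever RC artifact is being checked. Signature verification still needs the release manager's key (gpg --verify) and is out of scope here.

{code}
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.MessageDigest;

// Minimal sketch: recompute the MD5 of a release-candidate tarball so it can
// be compared against the published .md5 file. The default path is only a
// placeholder; pass the real artifact as the first argument.
public final class Md5Check {
  public static void main(String[] args) throws Exception {
    String path = args.length > 0 ? args[0] : "hadoop-2.1.0-beta-src.tar.gz";
    MessageDigest md5 = MessageDigest.getInstance("MD5");
    byte[] buf = new byte[8192];
    try (InputStream in = new FileInputStream(path)) {
      int n;
      while ((n = in.read(buf)) != -1) {
        md5.update(buf, 0, n);
      }
    }
    StringBuilder hex = new StringBuilder();
    for (byte b : md5.digest()) {
      hex.append(String.format("%02x", b & 0xff));
    }
    System.out.println(hex + "  " + path);
  }
}
{code}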
[jira] [Created] (MAPREDUCE-5426) MRAM fails to register to RM, AMRM token seems missing
Alejandro Abdelnur created MAPREDUCE-5426: - Summary: MRAM fails to register to RM, AMRM token seems missing Key: MAPREDUCE-5426 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5426 Project: Hadoop Map/Reduce Issue Type: Bug Components: applicationmaster Affects Versions: 2.1.0-beta Reporter: Alejandro Abdelnur Priority: Blocker Fix For: 2.1.0-beta trying to run the pi example in an unsecure pseudo cluster the job fails. It seems the AMRM token is MIA. The AM syslog have the following: {code} 2013-07-27 14:17:23,703 ERROR [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Exception while registering org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53) at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104) at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.registerApplicationMaster(ApplicationMasterProtocolPBClientImpl.java:109) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:176) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:95) at com.sun.proxy.$Proxy29.registerApplicationMaster(Unknown Source) at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:147) at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:107) at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:213) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter.serviceStart(MRAppMaster.java:789) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:101) at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1019) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.run(MRAppMaster.java:1394) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477) at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1390) at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1323) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): SIMPLE authentication is not enabled. 
Available:[TOKEN] at org.apache.hadoop.ipc.Client.call(Client.java:1369) at org.apache.hadoop.ipc.Client.call(Client.java:1322) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at com.sun.proxy.$Proxy28.registerApplicationMaster(Unknown Source) at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.registerApplicationMaster(ApplicationMasterProtocolPBClientImpl.java:106) ... 22 more {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
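The stack trace boils down to the AM falling back to SIMPLE authentication because no AMRM token was found in its credentials. A quick way to confirm the "token is MIA" theory is to dump the tokens visible to the AM's UGI right before registration. This is only a diagnostic sketch assuming the 2.1.0-beta classes UserGroupInformation and AMRMTokenIdentifier; it is not part of any fix.

{code}
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;
import org.apache.hadoop.yarn.security.AMRMTokenIdentifier;

// Diagnostic sketch: list the tokens in the current UGI and flag whether the
// AMRM token is among them. If it is missing, registration falls back to
// SIMPLE and fails exactly as in the stack trace above.
public final class AmrmTokenProbe {
  public static void main(String[] args) throws Exception {
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    boolean found = false;
    for (Token<? extends TokenIdentifier> t : ugi.getTokens()) {
      System.out.println("token kind=" + t.getKind() + " service=" + t.getService());
      if (AMRMTokenIdentifier.KIND_NAME.equals(t.getKind())) {
        found = true;
      }
    }
    System.out.println(found ? "AMRM token present" : "AMRM token missing");
  }
}
{code}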
[jira] [Resolved] (MAPREDUCE-4366) mapred metrics shows negative count of waiting maps and reduces
[ https://issues.apache.org/jira/browse/MAPREDUCE-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur resolved MAPREDUCE-4366. --- Resolution: Fixed Fix Version/s: 1.3.0 Hadoop Flags: Reviewed Thanks Sandy. Committed to branch-1. > mapred metrics shows negative count of waiting maps and reduces > --- > > Key: MAPREDUCE-4366 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-4366 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: jobtracker >Affects Versions: 1.0.2 >Reporter: Thomas Graves >Assignee: Sandy Ryza > Fix For: 1.3.0 > > Attachments: MAPREDUCE-4366-branch-1-1.patch, > MAPREDUCE-4366-branch-1.patch > > > Negative waiting_maps and waiting_reduces count is observed in the mapred > metrics. MAPREDUCE-1238 partially fixed this but it appears there is still > issues as we are seeing it, but not as bad. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
Re: [VOTE] Release Apache Hadoop 2.1.0-beta
As I've mentioned in my previous email, if we get YARN-701 in, we should also get in the fix for unmanaged AMs in an un-secure setup in 2.1.0-beta. Otherwise it is a regression of functionality that is already working. Because of that, and to avoid further delaying the release, I'm suggesting we mention in the release notes the API changes and behavior changes that YARN-918 and YARN-701 will bring into the next beta or GA release. thx On Wed, Jul 17, 2013 at 4:14 PM, Vinod Kumar Vavilapalli < vino...@hortonworks.com> wrote: > > On Jul 17, 2013, at 1:04 PM, Alejandro Abdelnur wrote: > > > * YARN-701 > > > > It should be addressed before a GA release. > > > > Still, as it is this breaks unmanaged AMs and to me > > that would be a blocker for the beta. > > > > YARN-701 and the unmanaged AMs fix should be committed > > in tandem. > > > > * YARN-918 > > > > It is a consequence of YARN-701 and depends on it. > > > > YARN-918 is an API change. And YARN-701 is a behaviour change. We need > both in 2.1.0. > > > > > * YARN-926 > > > > It would be nice to have it addressed before GA release. > > > Either ways. I'd get it in sooner than later specifically when we are > trying to replace the old API with the new one. > > Thanks, > +Vino > >
Re: [VOTE] Release Apache Hadoop 2.1.0-beta
Vinod, Thanks for reviving this thread. The current blockers are: https://issues.apache.org/jira/issues/?jql=project%20in%20(hadoop%2C%20mapreduce%2C%20hdfs%2C%20yarn)%20and%20status%20in%20(open%2C%20'patch%20available')%20and%20priority%20%3D%20blocker%20and%20%22Target%20Version%2Fs%22%20%3D%20%222.1.0-beta%22 By looking at them I don't see they are necessary blockers for a beta release. * HADOOP-9688 & HADOOP-9698 They definitely have to be addressed before a GA release. * YARN-701 It should be addressed before a GA release. Still, as it is this breaks unmanaged AMs and to me that would be a blocker for the beta. YARN-701 and the unmanaged AMs fix should be committed in tandem. * YARN-918 It is a consequence of YARN-701 and depends on it. * YARN-926 It would be nice to have it addressed before GA release. We could do a beta with what we have at the moment in branch-2 and have a special release note indicating API changes coming in the next beta/GA release as part of YARN-918 & YARN-926. IMO, we should move forward with the beta release with the current state. Else we'll continue delaying it and adding more things that break/change things. Thanks. Alejandro On Wed, Jul 17, 2013 at 12:24 PM, Vinod Kumar Vavilapalli < vino...@hortonworks.com> wrote: > > Looks like this RC has gone stale and lots of bug fixes went into 2.1 and > 2.1.0 branches and there are 4-5 outstanding blockers. And from what I see > in CHANGES.txt files there seems to be a confusion about which branch to > get in what. > > I'm blowing off the current 2.1.0 release branch so that we can create a > fresh release branch and call voting on that. I'll fix CHANGES.txt entries > as well as JIRA fix version for bugs committed recently if there are > inconsistencies. > > Let me know if something is amiss while I do this. > > Thanks, > +Vinod > > On Jul 3, 2013, at 11:06 AM, Vinod Kumar Vavilapalli wrote: > > > > > We should get these in, looking at them now. > > > > Thanks, > > +Vinod > > > > On Jun 28, 2013, at 12:03 PM, Hitesh Shah wrote: > > > >> Hi Arun, > >> > >> From a YARN perspective, YARN-791 and YARN-727 are 2 jiras that may > potentially change the apis. They can implemented in a backward compat > fashion if committed after 2.1.0. However, this will require adding of > differently-named apis ( different urls in case of the webservices ) and > make the current version of the api deprecated and/or obsolete. YARN-818 > which is currently patch available also changes behavior. > >> > >> Assuming that as soon as 2.1.0 is released, we are to follow a very > strict backward-compat retaining approach to all user-facing layers ( > api/webservices/rpc/... ) in common/hdfs/yarn/mapreduce, does it make sense > to try and pull them in and roll out a new RC after they are ready? Perhaps > Vinod can chime in if he is aware of any other such jiras under YARN-386 > which should be considered compat-related blockers for a 2.1.0 RC. > >> > >> thanks > >> -- Hitesh > >> > >> On Jun 26, 2013, at 1:17 AM, Arun C Murthy wrote: > >> > >>> Folks, > >>> > >>> I've created a release candidate (rc0) for hadoop-2.1.0-beta that I > would like to get released. 
> >>> > >>> This release represents a *huge* amount of work done by the community > (639 fixes) which includes several major advances including: > >>> # HDFS Snapshots > >>> # Windows support > >>> # YARN API stabilization > >>> # MapReduce Binary Compatibility with hadoop-1.x > >>> # Substantial amount of integration testing with rest of projects in > the ecosystem > >>> > >>> The RC is available at: > http://people.apache.org/~acmurthy/hadoop-2.1.0-beta-rc0/ > >>> The RC tag in svn is here: > http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.1.0-beta-rc0 > >>> > >>> The maven artifacts are available via repository.apache.org. > >>> > >>> Please try the release and vote; the vote will run for the usual 7 > days. > >>> > >>> thanks, > >>> Arun > >>> > >>> -- > >>> Arun C. Murthy > >>> Hortonworks Inc. > >>> http://hortonworks.com/ > >>> > >>> > >> > > > > -- Alejandro
[jira] [Created] (MAPREDUCE-5362) clean up POM dependencies
Alejandro Abdelnur created MAPREDUCE-5362: - Summary: clean up POM dependencies Key: MAPREDUCE-5362 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5362 Project: Hadoop Map/Reduce Issue Type: Bug Components: build Affects Versions: 2.1.0-beta Reporter: Alejandro Abdelnur Intermediate 'pom' modules define dependencies that are inherited by the leaf modules. This is causing issues in the IntelliJ IDE. We should normalize the leaf modules, as in common, hdfs and tools, where all dependencies are defined in each leaf module and the intermediate 'pom' modules do not define any dependencies. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
Re: Heads up: branch-2.1-beta
Have anybody from the HDFS side of things had a chance to look at the OOM that Roman was reporting? Thanks. On Wed, Jun 26, 2013 at 12:26 AM, Arun C Murthy wrote: > Ok, the last of the blockers is done. > > I'll roll an RC tonight. > > thanks, > Arun > > On Jun 20, 2013, at 2:40 PM, Arun C Murthy wrote: > > > I think I've shared this before, but here you go again… > > > > http://s.apache.org/hadoop-2.1.0-beta-blockers > > > > At this point, HADOOP-9421 seems like the most important. > > > > thanks, > > Arun > > > > On Jun 20, 2013, at 8:31 AM, Alejandro Abdelnur > wrote: > > > >> Arun, > >> > >> It seems there are still a few things to iron out before getting 2.1 > out of > >> the door. > >> > >> As RM for the release, would you mind sharing the current state of > things > >> and your estimate on when it could happen? > >> > >> Thanks. > >> > >> > >> On Wed, Jun 19, 2013 at 4:29 PM, Roman Shaposhnik > wrote: > >> > >>> On Tue, Jun 18, 2013 at 11:58 PM, Arun C Murthy > >>> wrote: > >>>> Ping. Any luck? > >>> > >>> Unfortunately I've ran into: > >>> https://issues.apache.org/jira/browse/HADOOP-9654 > >>> > >>> which was caused by the unrelated memory pressure on the NN, > >>> but it had an unfortunate side effect of making the rest of the > >>> testing stuck. I'll correct the problem now and re-run. > >>> > >>> Thanks, > >>> Roman. > >>> > >> > >> > >> > >> -- > >> Alejandro > > > > -- > > Arun C. Murthy > > Hortonworks Inc. > > http://hortonworks.com/ > > > > > > -- > Arun C. Murthy > Hortonworks Inc. > http://hortonworks.com/ > > > -- Alejandro
Re: Heads up: branch-2.1-beta
Arun, It seems there are still a few things to iron out before getting 2.1 out of the door. As RM for the release, would you mind sharing the current state of things and your estimate on when it could happen? Thanks. On Wed, Jun 19, 2013 at 4:29 PM, Roman Shaposhnik wrote: > On Tue, Jun 18, 2013 at 11:58 PM, Arun C Murthy > wrote: > > Ping. Any luck? > > Unfortunately I've ran into: > https://issues.apache.org/jira/browse/HADOOP-9654 > > which was caused by the unrelated memory pressure on the NN, > but it had an unfortunate side effect of making the rest of the > testing stuck. I'll correct the problem now and re-run. > > Thanks, > Roman. > -- Alejandro
[jira] [Created] (MAPREDUCE-5333) Add test that verifies MRAM works correctly when sending requests with non-normalized capabilities
Alejandro Abdelnur created MAPREDUCE-5333: - Summary: Add test that verifies MRAM works correctly when sending requests with non-normalized capabilities Key: MAPREDUCE-5333 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5333 Project: Hadoop Map/Reduce Issue Type: Test Components: mr-am Affects Versions: 2.1.0-beta Reporter: Alejandro Abdelnur This is a follow-up to MAPREDUCE-5310 to ensure nothing broke after we removed normalization on the MRAM side. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
Re: Heads up: branch-2.1-beta
Arun, actually YARN-787 fixed the problem. Run those tests from trunk and branch-2.1-beta HEADs without issues. Without YARN-787 they fail. Thx On Sun, Jun 16, 2013 at 12:03 PM, Arun C Murthy wrote: > Alejandro, > > Can you please take a look at MAPREDUCE-5327? This is related to YARN-787. > > thanks, > Arun > > On Jun 16, 2013, at 8:56 AM, Alejandro Abdelnur wrote: > > > Thanks Arun, I'll take care of committing YARN-752 and MAPREDUCE-5171 > > around noon today (SUN noon PST). > > > > What is your take on YARN-791 & MAPREDUCE-5130? > > > > > > > > On Sun, Jun 16, 2013 at 7:02 AM, Arun C Murthy > wrote: > > > >> > >> On Jun 16, 2013, at 5:39 AM, Arun C Murthy wrote: > >> > >>> > >>> On Jun 15, 2013, at 8:19 AM, Alejandro Abdelnur wrote: > >>>> > >>>> Of the JIRAs in my laundry list for 2.1 the ones I would really want > in > >> are > >>>> YARN-752, MAPREDUCE-5171 & YARN-787. > >>> > >>> I agree YARN-787 needs to go in ASAP & is a blocker - I'm looking at it > >> right now. > >> > >> I committed YARN-787. Thanks. > >> > >> Arun > >> > >> > > > > > > -- > > Alejandro > > -- > Arun C. Murthy > Hortonworks Inc. > http://hortonworks.com/ > > > -- Alejandro
[jira] [Resolved] (MAPREDUCE-5327) TestMRJobs and TestUberAM fail at verifying counters
[ https://issues.apache.org/jira/browse/MAPREDUCE-5327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur resolved MAPREDUCE-5327. --- Resolution: Invalid MAPREDUCE-5310 stopped setting the MIN in the local clusterinfo. That is why the tests were failing, because the slot-millis calculation was querying the MIN from the local clusterinfo. YARN-787 removed the MIN from cluster info and wired the MIN from configuration for the calculation of slot-millis. So, YARN-787 fixed what MAPREDUCE-5310 broke. I've just run those tests on the head of trunk and 2.1-beta and they are passing. Closing this one as invalid > TestMRJobs and TestUberAM fail at verifying counters > > > Key: MAPREDUCE-5327 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5327 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Zhijie Shen >Assignee: Alejandro Abdelnur >Priority: Critical > > See the test report in YARN-829 and YARN-830: > * https://builds.apache.org/job/PreCommit-YARN-Build/1269//testReport/ > * https://builds.apache.org/job/PreCommit-YARN-Build/1270//testReport/ > The failure seems to be related to: > {code} > Assert > .assertTrue(counters.findCounter(JobCounter.SLOTS_MILLIS_MAPS) != null > && counters.findCounter(JobCounter.SLOTS_MILLIS_MAPS).getValue() > != 0); > {code} > in TestMRJobs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
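For context on why the MIN matters here: slot-millis is derived from how many minimum-sized allocations a task's memory request amounts to, multiplied by the task's run time. The sketch below is only a back-of-the-envelope approximation of that arithmetic and may differ in detail from the actual MR AM code behind MAPREDUCE-5311 / YARN-787.

{code}
// Back-of-the-envelope approximation of the slot-millis arithmetic discussed
// above (illustrative only; the real computation lives in the MR AM).
public final class SlotMillisSketch {
  static long slotMillis(long durationMillis, int taskMemoryMb, int minAllocationMb) {
    // A task "occupies" as many slots as minimum allocations fit in its request.
    int slots = (int) Math.ceil((double) taskMemoryMb / minAllocationMb);
    return durationMillis * slots;
  }

  public static void main(String[] args) {
    // Example: a 1536 MB map task running for 40 seconds against a 1024 MB
    // minimum allocation counts as 2 slots, i.e. 80000 slot-millis.
    System.out.println(slotMillis(40000L, 1536, 1024));
  }
}
{code}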
Re: Heads up: branch-2.1-beta
Thanks Arun, I'll take care of committing YARN-752 and MAPREDUCE-5171 around noon today (SUN noon PST). What is your take on YARN-791 & MAPREDUCE-5130? On Sun, Jun 16, 2013 at 7:02 AM, Arun C Murthy wrote: > > On Jun 16, 2013, at 5:39 AM, Arun C Murthy wrote: > > > > > On Jun 15, 2013, at 8:19 AM, Alejandro Abdelnur wrote: > >> > >> Of the JIRAs in my laundry list for 2.1 the ones I would really want in > are > >> YARN-752, MAPREDUCE-5171 & YARN-787. > > > > I agree YARN-787 needs to go in ASAP & is a blocker - I'm looking at it > right now. > > I committed YARN-787. Thanks. > > Arun > > -- Alejandro
Re: Heads up: branch-2.1-beta
If the intention is to get the release out in time for the Hadoop Summit we have a very tight schedule. Because the release vote runs for 7 days, we should have an RC latest Monday afternoon, and we should encourage folks to verify & vote ASAP, so if we need to cut a new RC we can do it on Tuesday. Another thing to consider is that if the changes on an RC are corrections that do not affect code, we could agree on not reseting the voting period clock if we need to cut a new RC (ie doc, build, notes changes). Of the JIRAs in my laundry list for 2.1 the ones I would really want in are YARN-752, MAPREDUCE-5171 & YARN-787. The first 2 are already +1ed, the last one needs to be reviewed. I have not committed the first 2 ones yet because I don't want to disrupt things for the folks doing QA. Arun, as you are coordinating the work for this release, please do commit them or give me the go ahead and I'll commit. Also, it would be great if you can review YARN-787 (as per discussions, the changes on the milli-slot calculations do not affect the current calculations, that would be left for MAPREDUCE-5311 to do). I'll be checking my email over the weekend and I can take care of some stuff if needed (while the monkeys sleep). Thx On Fri, Jun 14, 2013 at 9:56 PM, Alejandro Abdelnur wrote: > Following is a revisited assessment of JIRAs I would like to get in the > 2.1 release: > > From the 1st group I think all 3 should make. > > From the 2nd group I think YARN-791 should make it for sure and ideally > MAPREDUCE-5130. > > From the 3rd group, I don't think this JIRA will make it. > > From the 4th group, we don't need to worry about this or 2.1 > > Thanks > > Alejandro > > -- > JIRAs that are in shape to make it to 2.1 > > * YARN-752: In AMRMClient, automatically add corresponding rack requests > for requested nodes > > impact: behavior change > > status: patch avail, +1ed. > > * MAPREDUCE-5171: Expose blacklisted nodes from the MR AM REST API > > impact: Addition to MRAM HTTP API > > status: patch avail, +1ed, needs to be committed > > * YARN-787: Remove resource min from Yarn client API > > impact: Yarn client API change > > status: patch avail, needs to be reviewed. (the calculation of slot-millis > is not affected, the MIN is taken from conf for now) > > -- > JIRAs that require minor work to make it to 2.1 > > * YARN-521: Augment AM - RM client module to be able to request containers > only at specific locations > > impact: AMRM client API change > > status: patch not avail yet (requires YARN-752) > > * YARN-791: Ensure that RM RPC APIs that return nodes are consistent with > /nodes REST API > > impact: Yarn client API & proto change > > status: patch avail, review in progress > > * MAPREDUCE-5130: Add missing job config options to mapred-default.xml > > impact: behavior change > > status: patch avail but some tests are failing > > -- > JIRAs that require significant work to make it to 2.1 and may not make it > > * YARN-649: Make container logs available over HTTP in plain text > > impact: Addition to NM HTTP REST API. 
Needed for MAPREDUCE-4362 (which > does not change API) > > status: patch avail, review in progress > > -- > JIRAs that don't need to make it to 2.1 > > * MAPREDUCE-5311: Remove slot millis computation logic and deprecate > counter constants > > impact: behavior change > > status: per discussion we should first add memory-millis and vcores-millis > > -- > > > On Fri, Jun 14, 2013 at 7:17 PM, Roman Shaposhnik wrote: > >> On Thu, Jun 6, 2013 at 4:48 AM, Arun C Murthy >> wrote: >> > >> > On Jun 5, 2013, at 11:04 AM, Roman Shaposhnik wrote >> >> >> >> On the Bigtop side of things, once we have stable Bigtop 0.6.0 platform >> >> based on Hadoop 2.0.x codeline we plan to start running the same >> battery >> >> of integration tests on the branch-2.1-beta. >> >> >> >> We plan to simply file JIRAs if anything gets detected and I will also >> >> publish the URL of the Jenkins job once it gets created. >> > >> > Thanks Roman. Is there an ETA for this? Also, please file jiras with >> Blocker priority to catch attention. >> >> The build is up and running (and all green on all of the 9 Linux >> platforms!): >> http://bigtop01.cloudera.org:8080/job/Hadoop-2.1.0/ >> >> The immediate benefit here is that we get to see that the >> build is ok on all these Linuxes and all anybody can easily >> install packaged Hadoop 2.1.0 nightly builds. >> >> Starting from next week, I'll start running regular tests >> on these bits and will keep you guys posted! >> >> Thanks, >> Roman. >> > > > > -- > Alejandro > -- Alejandro
Re: Heads up: branch-2.1-beta
Following is a revisited assessment of JIRAs I would like to get in the 2.1 release: >From the 1st group I think all 3 should make. >From the 2nd group I think YARN-791 should make it for sure and ideally MAPREDUCE-5130. >From the 3rd group, I don't think this JIRA will make it. >From the 4th group, we don't need to worry about this or 2.1 Thanks Alejandro -- JIRAs that are in shape to make it to 2.1 * YARN-752: In AMRMClient, automatically add corresponding rack requests for requested nodes impact: behavior change status: patch avail, +1ed. * MAPREDUCE-5171: Expose blacklisted nodes from the MR AM REST API impact: Addition to MRAM HTTP API status: patch avail, +1ed, needs to be committed * YARN-787: Remove resource min from Yarn client API impact: Yarn client API change status: patch avail, needs to be reviewed. (the calculation of slot-millis is not affected, the MIN is taken from conf for now) -- JIRAs that require minor work to make it to 2.1 * YARN-521: Augment AM - RM client module to be able to request containers only at specific locations impact: AMRM client API change status: patch not avail yet (requires YARN-752) * YARN-791: Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API impact: Yarn client API & proto change status: patch avail, review in progress * MAPREDUCE-5130: Add missing job config options to mapred-default.xml impact: behavior change status: patch avail but some tests are failing -- JIRAs that require significant work to make it to 2.1 and may not make it * YARN-649: Make container logs available over HTTP in plain text impact: Addition to NM HTTP REST API. Needed for MAPREDUCE-4362 (which does not change API) status: patch avail, review in progress -- JIRAs that don't need to make it to 2.1 * MAPREDUCE-5311: Remove slot millis computation logic and deprecate counter constants impact: behavior change status: per discussion we should first add memory-millis and vcores-millis -- On Fri, Jun 14, 2013 at 7:17 PM, Roman Shaposhnik wrote: > On Thu, Jun 6, 2013 at 4:48 AM, Arun C Murthy wrote: > > > > On Jun 5, 2013, at 11:04 AM, Roman Shaposhnik wrote > >> > >> On the Bigtop side of things, once we have stable Bigtop 0.6.0 platform > >> based on Hadoop 2.0.x codeline we plan to start running the same battery > >> of integration tests on the branch-2.1-beta. > >> > >> We plan to simply file JIRAs if anything gets detected and I will also > >> publish the URL of the Jenkins job once it gets created. > > > > Thanks Roman. Is there an ETA for this? Also, please file jiras with > Blocker priority to catch attention. > > The build is up and running (and all green on all of the 9 Linux > platforms!): > http://bigtop01.cloudera.org:8080/job/Hadoop-2.1.0/ > > The immediate benefit here is that we get to see that the > build is ok on all these Linuxes and all anybody can easily > install packaged Hadoop 2.1.0 nightly builds. > > Starting from next week, I'll start running regular tests > on these bits and will keep you guys posted! > > Thanks, > Roman. > -- Alejandro
Re: Heads up: branch-2.1-beta
Arun Forgot to make it explicit in previous email, I'll be happy to help so this is done ASAP. Please, let me know how you want to proceed Thx On Fri, Jun 14, 2013 at 2:02 PM, Alejandro Abdelnur wrote: > Arun, > > This sounds great. Following is the list of JIRAs I'd like to get in. Note > that the are ready or almost ready, my estimate is that they can be taken > care of in a couple of day. > > Thanks. > > * YARN-752: In AMRMClient, automatically add corresponding rack requests > for requested nodes > > impact: behavior change > > status: patch avail, reviewed by Bikas. As Bikas did some changes it needs > another committer to look at it. > > * YARN-521: Augment AM - RM client module to be able to request containers > only at specific locations > > impact: AMRM client API change > > status: patch avail, needs to be reviewed, needs YARN-752 > > * YARN-791: Ensure that RM RPC APIs that return nodes are consistent with > /nodes REST API > > impact: Yarn client API & proto change > > status: patch avail, review in progress > > * YARN-649: Make container logs available over HTTP in plain text > > impact: Addition to NM HTTP REST API. Needed for MAPREDUCE-4362 (which > does not change API) > > status: patch avail, review in progress > > * MAPREDUCE-5171: Expose blacklisted nodes from the MR AM REST API > > impact: Addition to MRAM HTTP API > > status: patch avail, +1ed, needs to be committed > > * MAPREDUCE-5130: Add missing job config options to mapred-default.xml > > impact: behavior change > > status: patch avail, needs to be reviewed > > * MAPREDUCE-5311: Remove slot millis computation logic and deprecate > counter constants > > impact: behavior change > > status: patch avail, needs to be reviewed > > * YARN-787: Remove resource min from Yarn client API > > impact: Yarn client API change > > status: patch needs rebase, depends on MAPREDUCE-5311 > > > > > On Fri, Jun 14, 2013 at 1:17 PM, Arun C Murthy wrote: > >> As Ramya noted, things are looking good on branch-2.1-beta ATM. >> >> Henceforth, can I please ask committers to hold off non-blocker fixes for >> the final set of tests? >> >> thanks, >> Arun >> >> On Jun 4, 2013, at 8:32 AM, Arun C Murthy wrote: >> >> > Folks, >> > >> > The vast majority of of the planned features and API work is complete, >> thanks to everyone who contributed! >> > >> > I've created a branch-2.1-beta branch from which I anticipate I can >> make the first of our beta releases very shortly. >> > >> > For now the remaining work is to wrap up loose ends i.e. last minute >> api work (e.g. YARN-759 showed up last night for consideration), bug-fixes >> etc.; then run this through a battery of unit/system/integration tests and >> do a final review before we ship. There is more work remaining on >> documentation (e.g. HADOOP-9517) and I plan to personally focus on it this >> week - obviously help reviewing docs is very welcome. >> > >> > Committers, from now, please please exercise your judgement on where >> you commit. Typically, features should go into branch-2 with 2.3.0 as the >> version on jira (fix-version 2.3.0 is ready). The expectation is that 2.2.0 >> will be limited to content in branch-2.1-beta and we stick to stabilizing >> it henceforth (I've deliberately not created 2.2.0 fix-version on jira yet). >> > >> > thanks, >> > Arun >> >> -- >> Arun C. Murthy >> Hortonworks Inc. >> http://hortonworks.com/ >> >> >> > > > -- > Alejandro > -- Alejandro
Re: Heads up: branch-2.1-beta
Arun, This sounds great. Following is the list of JIRAs I'd like to get in. Note that the are ready or almost ready, my estimate is that they can be taken care of in a couple of day. Thanks. * YARN-752: In AMRMClient, automatically add corresponding rack requests for requested nodes impact: behavior change status: patch avail, reviewed by Bikas. As Bikas did some changes it needs another committer to look at it. * YARN-521: Augment AM - RM client module to be able to request containers only at specific locations impact: AMRM client API change status: patch avail, needs to be reviewed, needs YARN-752 * YARN-791: Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API impact: Yarn client API & proto change status: patch avail, review in progress * YARN-649: Make container logs available over HTTP in plain text impact: Addition to NM HTTP REST API. Needed for MAPREDUCE-4362 (which does not change API) status: patch avail, review in progress * MAPREDUCE-5171: Expose blacklisted nodes from the MR AM REST API impact: Addition to MRAM HTTP API status: patch avail, +1ed, needs to be committed * MAPREDUCE-5130: Add missing job config options to mapred-default.xml impact: behavior change status: patch avail, needs to be reviewed * MAPREDUCE-5311: Remove slot millis computation logic and deprecate counter constants impact: behavior change status: patch avail, needs to be reviewed * YARN-787: Remove resource min from Yarn client API impact: Yarn client API change status: patch needs rebase, depends on MAPREDUCE-5311 On Fri, Jun 14, 2013 at 1:17 PM, Arun C Murthy wrote: > As Ramya noted, things are looking good on branch-2.1-beta ATM. > > Henceforth, can I please ask committers to hold off non-blocker fixes for > the final set of tests? > > thanks, > Arun > > On Jun 4, 2013, at 8:32 AM, Arun C Murthy wrote: > > > Folks, > > > > The vast majority of of the planned features and API work is complete, > thanks to everyone who contributed! > > > > I've created a branch-2.1-beta branch from which I anticipate I can make > the first of our beta releases very shortly. > > > > For now the remaining work is to wrap up loose ends i.e. last minute api > work (e.g. YARN-759 showed up last night for consideration), bug-fixes > etc.; then run this through a battery of unit/system/integration tests and > do a final review before we ship. There is more work remaining on > documentation (e.g. HADOOP-9517) and I plan to personally focus on it this > week - obviously help reviewing docs is very welcome. > > > > Committers, from now, please please exercise your judgement on where you > commit. Typically, features should go into branch-2 with 2.3.0 as the > version on jira (fix-version 2.3.0 is ready). The expectation is that 2.2.0 > will be limited to content in branch-2.1-beta and we stick to stabilizing > it henceforth (I've deliberately not created 2.2.0 fix-version on jira yet). > > > > thanks, > > Arun > > -- > Arun C. Murthy > Hortonworks Inc. > http://hortonworks.com/ > > > -- Alejandro
[jira] [Created] (MAPREDUCE-5312) TestRMNMInfo is failing
Alejandro Abdelnur created MAPREDUCE-5312: - Summary: TestRMNMInfo is failing Key: MAPREDUCE-5312 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5312 Project: Hadoop Map/Reduce Issue Type: Bug Affects Versions: 3.0.0 Reporter: Alejandro Abdelnur 2 test methods are failing: {code} Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 13.904 sec <<< FAILURE! testRMNMInfo(org.apache.hadoop.mapreduce.v2.TestRMNMInfo) Time elapsed: 198 sec <<< ERROR! java.lang.NullPointerException at org.apache.hadoop.mapreduce.v2.TestRMNMInfo.testRMNMInfo(TestRMNMInfo.java:121) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30) at org.junit.runners.ParentRunner.run(ParentRunner.java:300) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189) at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165) at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75) testRMNMInfoMissmatch(org.apache.hadoop.mapreduce.v2.TestRMNMInfo) Time elapsed: 146 sec <<< ERROR! 
java.lang.NullPointerException at org.apache.hadoop.mapreduce.v2.TestRMNMInfo.testRMNMInfoMissmatch(TestRMNMInfo.java:159) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50) at org.junit.runners.ParentRunner$2.evaluate(Parent
[jira] [Created] (MAPREDUCE-5311) Remove slot millis computation logic and deprecate counters
Alejandro Abdelnur created MAPREDUCE-5311: - Summary: Remove slot millis computation logic and deprecate counters Key: MAPREDUCE-5311 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5311 Project: Hadoop Map/Reduce Issue Type: Bug Components: applicationmaster Affects Versions: 2.0.4-alpha Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur Per the discussion in MAPREDUCE-5310 and comments in the code, we should remove all the related logic and just leave the counter constant for backwards compatibility, deprecating the constant. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (MAPREDUCE-5310) MRAM should not normalize allocation request capabilities
Alejandro Abdelnur created MAPREDUCE-5310: - Summary: MRAM should not normalize allocation request capabilities Key: MAPREDUCE-5310 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5310 Project: Hadoop Map/Reduce Issue Type: Bug Components: applicationmaster Affects Versions: 2.0.4-alpha Reporter: Alejandro Abdelnur Assignee: Alejandro Abdelnur The MRAM is assuming knowledge of the scheduler internals to normalize allocation request capabilities. Per discussions in YARN-689 and YARN-769 it should not do that. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
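For readers not steeped in the scheduler code, "normalizing" a capability means rounding the request up to the scheduler's minimum allocation; the point of this JIRA is that the MR AM should leave that rounding to the scheduler instead of duplicating it. A purely illustrative sketch of the rounding itself (not the MR AM or scheduler code):

{code}
// Illustration of scheduler-side normalization: a memory request is rounded
// up to the next multiple of the minimum allocation, so a client that also
// rounds is duplicating scheduler logic.
public final class NormalizeSketch {
  static int normalizeMemory(int requestedMb, int minAllocationMb) {
    if (requestedMb <= 0) {
      return minAllocationMb;
    }
    int multiples = (requestedMb + minAllocationMb - 1) / minAllocationMb; // ceiling division
    return multiples * minAllocationMb;
  }

  public static void main(String[] args) {
    // e.g. a 1500 MB request with a 1024 MB minimum is normalized to 2048 MB.
    System.out.println(normalizeMemory(1500, 1024));
  }
}
{code}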
[jira] [Created] (MAPREDUCE-5304) mapreduce.Job killTask/failTask/getTaskCompletionEvents methods have incompatible signature changes
Alejandro Abdelnur created MAPREDUCE-5304: - Summary: mapreduce.Job killTask/failTask/getTaskCompletionEvents methods have incompatible signature changes Key: MAPREDUCE-5304 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5304 Project: Hadoop Map/Reduce Issue Type: Bug Affects Versions: 2.0.4-alpha Reporter: Alejandro Abdelnur Assignee: Karthik Kambatla Priority: Blocker Fix For: 2.1.0-beta Pointed out by [~zjshen] in MAPREDUCE-4942. In the {{o.a.h.mapreduce.Job}} class, the following changed from Hadoop 1 to Hadoop 2: boolean failTask(TaskAttemptID): change in return type from void to boolean. boolean killTask(TaskAttemptID): change in return type from void to boolean. TaskCompletionEvent[] getTaskCompletionEvents(int): change in return type from org.apache.hadoop.mapred.TaskCompletionEvent[] to org.apache.hadoop.mapreduce.TaskCompletionEvent[]. Using the same rationale as in other JIRAs, we should fix this to ensure Hadoop 1 to Hadoop 2 source compatibility (taking the 0.23.x releases as a casualty, as there is no right way for everybody because we screwed up :( ). Flagging it as an incompatible change because of 0.23. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
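The incompatibility is easiest to see from a caller's point of view. The fragment below is written against the Hadoop 2 signatures listed in the description (the job and attempt IDs are placeholders); the same code does not compile against Hadoop 1, where killTask/failTask return void and getTaskCompletionEvents returns the org.apache.hadoop.mapred event type.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Cluster;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobID;
import org.apache.hadoop.mapreduce.TaskAttemptID;
import org.apache.hadoop.mapreduce.TaskCompletionEvent;

// Illustrative caller against the Hadoop 2 signatures described above.
// args[0] is a job id and args[1] a task attempt id (placeholders).
public final class KillTaskExample {
  public static void main(String[] args) throws Exception {
    Cluster cluster = new Cluster(new Configuration());
    Job job = cluster.getJob(JobID.forName(args[0]));
    TaskAttemptID attempt = TaskAttemptID.forName(args[1]);

    // Hadoop 2: killTask returns boolean; on Hadoop 1 it was void, so
    // capturing the result only compiles against Hadoop 2.
    boolean killed = job.killTask(attempt);
    System.out.println("killTask returned " + killed);

    // Hadoop 2: returns org.apache.hadoop.mapreduce.TaskCompletionEvent[];
    // Hadoop 1 returned the org.apache.hadoop.mapred variant here.
    TaskCompletionEvent[] events = job.getTaskCompletionEvents(0);
    System.out.println(events.length + " completion events");
  }
}
{code}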
Re: [VOTE] Release Apache Hadoop 2.0.5-alpha (rc2)
+1 RC2. Verified MD5 & signature, checked CHANGES.txt files, built, configured pseudo cluster, run a couple of sample jobs, tested HTTPFS. On Mon, Jun 3, 2013 at 12:51 PM, Konstantin Boudnik wrote: > I have rolled out release candidate (rc2) for hadoop-2.0.5-alpha. > > The difference between rc1 and rc2 is the "optimistic release date" is set > for > 06/06/2013 in the CHANGES.txt files. > > The binary artifact is the same - there's no need to rebuild it. The maven > artifacts are the same. > > The difference between the two RCs: > > svn diff \ > > https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.5-alpha-rc1/\ > > https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.5-alpha-rc2/ > > New RC builds are uploaded to the web. > The RC is available at: > http://people.apache.org/~cos/hadoop-2.0.5-alpha-rc2/ > The RC tag in svn is here: > http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.5-alpha-rc2 > > I would like to extend the vote for another three days for it is such a > minor > change that doesn't affect anything but the recorded release date. Please > cast your vote before 06/06/2013 5pm PDT. > > Thanks for your patience! > Cos > > On Fri, May 31, 2013 at 09:27PM, Konstantin Boudnik wrote: > > All, > > > > I have created a release candidate (rc1) for hadoop-2.0.5-alpha that I > would > > like to release. > > > > This is a stabilization release that includes fixed for a couple a of > issues > > discovered in the testing with BigTop 0.6.0 release candidate. > > > > The RC is available at: > http://people.apache.org/~cos/hadoop-2.0.5-alpha-rc1/ > > The RC tag in svn is here: > http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.5-alpha-rc1 > > > > The maven artifacts will be available via repository.apache.org on Sat, > June > > 1st, 2013 at 2 pm PDT as outlined here > > http://s.apache.org/WKD > > > > Please try the release bits and vote; the vote will run for the 3 days, > > because this is just a version name change. The bits are identical to > the ones > > voted on before in > > http://s.apache.org/2041move > > > > Thanks for your voting > > Cos > > > -- Alejandro
Re: [VOTE] Release Apache Hadoop 2.0.5-alpha
I know we had this issue before. But and easy way to solve it it would be using as release date the date the vote ends. Anyway, not a big deal if we are not cutting a new rc for other reason +1 on rc1 Thx On Jun 2, 2013, at 12:04 AM, Konstantin Boudnik wrote: > Alejandro, > > I believe this is chicken and egg problem: I can't put release date into > unreleased tarball. Looking into 2.0.4-alpha source tarball I see the same > situation: > > Release 2.0.4-alpha - UNRELEASED > > But in the trunk and branch-2 the release data is in place. I don't think this > is an issue. But I would be happy to fix it if this seems to be a problem. > > Thanks, > Cos > > On Sat, Jun 01, 2013 at 08:04PM, Alejandro Abdelnur wrote: >> On RC1, verified MD5 & signature, built, configured pseudo cluster, run a >> couple of sample jobs, tested HTTPFS. >> >> CHANGES.txt files contents are correct now. Still, a minor NIT, they have >> 2.0.5 as UNRELEASED, shouldn't they have a date (I would assume the date >> the vote ends). >> >> Thanks >> >> >> On Fri, May 31, 2013 at 9:39 PM, J. Rottinghuis >> wrote: >> >>> Thanks for fixing Cos. >>> http://people.apache.org/~cos/hadoop-2.0.5-alpha-rc1/ >>> looks good to me. >>> +1 (non-binding) >>> >>> Thanks, >>> >>> Joep >>> >>> >>> On Fri, May 31, 2013 at 8:25 PM, Konstantin Boudnik >>> wrote: >>> >>>> Ok, WRT HDFS-4646 - it is all legit and the code is in branch-2.0.4-alpha >>>> and >>>> later. It has been committed as >>>> r1465124 >>>> The reason it isn't normally visible because of the weird commit message: >>>> >>>>svn merge -c1465121 from trunk >>>> >>>> So, we good. I am done with the CHANGES.txt fixed that you guys have >>> noted >>>> earlier and will be re-spinning RC1 in a few. >>>> >>>> Cos >>>> >>>> On Fri, May 31, 2013 at 08:07PM, Konstantin Boudnik wrote: >>>>> Alejandro, >>>>> >>>>> thanks for looking into this. Indeed - I missed the 2.0.5-alpha section >>>> in >>>>> YARN CHANGES.txt. Added now. As for HDFS-4646: apparently I didn't get >>>> into >>>>> to branch-2.0.4-alpha back then, although I distinctively remember >>> doing >>>> this. >>>>> Let me pull it into 2.0.5-alpha and update CHANGES.txt to reflect it. >>>> Also, I >>>>> will do JIRA in a moment. >>>>> >>>>> Joep, appreciate the thorough examination. I have fixed the dates for >>> the >>>>> releases 2.0.4-alpha. As for the top-level readme file - sorry I wasn't >>>> aware >>>>> about them. As for the binary: I am pretty sure we are only releasing >>>> source >>>>> code, but I will put binaries into the rc1 respin. >>>>> >>>>> I will respin rc1 shortly. Appreciate the feedback! >>>>> Cos >>>>> >>>>> On Fri, May 31, 2013 at 05:27PM, Alejandro Abdelnur wrote: >>>>>> Verified MD5 & signature, built, configured pseudo cluster, run a >>>> couple of >>>>>> sample jobs, tested HTTPFS. >>>>>> >>>>>> Still, something seems odd. >>>>>> >>>>>> The HDFS CHANGES.txt has the following entry under 2.0.5-alpha: >>>>>> >>>>>> HDFS-4646. createNNProxyWithClientProtocol ignores configured >>> timeout >>>>>> value (Jagane Sundar via cos) >>>>>> >>>>>> but I don't see that in the branch. >>>>>> >>>>>> And, the YARN CHANGES.txt does not have the 2.0.5-alpha section (it >>>> should >>>>>> be there empty). >>>>>> >>>>>> Cos, can you please look at these 2 things and explain/fix? >>>>>> >>>>>> Thanks. 
>>>>>> >>>>>> >>>>>> >>>>>> On Fri, May 31, 2013 at 4:04 PM, Konstantin Boudnik >>>> wrote: >>>>>> >>>>>>> All, >>>>>>> >>>>>>> I have created a release candidate (rc0) for hadoop-2.0.5-alpha >>> that >>>> I >>>>>>> would >>>>>>> like to release. >>>>>>> >>>>>>> This is a stabilization release that includes fixed for a couple a >>> of >>>>>>> issues >>>>>>> discovered in the testing with BigTop 0.6.0 release candidate. >>>>>>> >>>>>>> The RC is available at: >>>>>>> http://people.apache.org/~cos/hadoop-2.0.5-alpha-rc0/ >>>>>>> The RC tag in svn is here: >>>>>>> >>>> >>> http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.5-alpha-rc0 >>>>>>> >>>>>>> The maven artifacts will be available via repository.apache.org on >>>> Sat, >>>>>>> June >>>>>>> 1st, 2013 at 2 pm PDT as outlined here >>>>>>>http://s.apache.org/WKD >>>>>>> >>>>>>> Please try the release bits and vote; the vote will run for the 3 >>>> days, >>>>>>> because this is just a version name change. The bits are identical >>>> to the >>>>>>> ones >>>>>>> voted on before in >>>>>>>http://s.apache.org/2041move >>>>>>> >>>>>>> Thanks for your voting >>>>>>> Cos >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Alejandro >>>> >>>> >>>> >>> >> >> >> >> -- >> Alejandro
Re: [VOTE] Release Apache Hadoop 2.0.5-alpha
On RC1, verified MD5 & signature, built, configured a pseudo cluster, ran a couple of sample jobs, tested HTTPFS. The CHANGES.txt files' contents are correct now. Still, a minor nit: they have 2.0.5 as UNRELEASED; shouldn't they have a date (I would assume the date the vote ends)? Thanks On Fri, May 31, 2013 at 9:39 PM, J. Rottinghuis wrote: > Thanks for fixing, Cos. > http://people.apache.org/~cos/hadoop-2.0.5-alpha-rc1/ > looks good to me. > +1 (non-binding) > > Thanks, > > Joep > > > On Fri, May 31, 2013 at 8:25 PM, Konstantin Boudnik > wrote: > > > Ok, WRT HDFS-4646 - it is all legit and the code is in branch-2.0.4-alpha > > and later. It has been committed as r1465124. > > The reason it isn't normally visible is the weird commit message: > > > > svn merge -c1465121 from trunk > > > > So, we're good. I am done with the CHANGES.txt fixes that you guys have noted > > earlier and will be re-spinning RC1 in a few. > > > > Cos > > > > On Fri, May 31, 2013 at 08:07PM, Konstantin Boudnik wrote: > > > Alejandro, > > > > > > thanks for looking into this. Indeed - I missed the 2.0.5-alpha section in > > > YARN CHANGES.txt. Added now. As for HDFS-4646: apparently I didn't get it > > > into branch-2.0.4-alpha back then, although I distinctly remember doing this. > > > Let me pull it into 2.0.5-alpha and update CHANGES.txt to reflect it. Also, I > > > will file a JIRA in a moment. > > > > > > Joep, appreciate the thorough examination. I have fixed the dates for the > > > 2.0.4-alpha releases. As for the top-level readme files - sorry, I wasn't aware > > > of them. As for the binary: I am pretty sure we are only releasing source > > > code, but I will put binaries into the rc1 respin. > > > > > > I will respin rc1 shortly. Appreciate the feedback! > > > Cos > > > > > > On Fri, May 31, 2013 at 05:27PM, Alejandro Abdelnur wrote: > > > > Verified MD5 & signature, built, configured a pseudo cluster, ran a couple of > > > > sample jobs, tested HTTPFS. > > > > > > > > Still, something seems odd. > > > > > > > > The HDFS CHANGES.txt has the following entry under 2.0.5-alpha: > > > > > > > > HDFS-4646. createNNProxyWithClientProtocol ignores configured timeout > > > > value (Jagane Sundar via cos) > > > > > > > > but I don't see that in the branch. > > > > > > > > And the YARN CHANGES.txt does not have the 2.0.5-alpha section (it should > > > > be there, empty). > > > > > > > > Cos, can you please look at these 2 things and explain/fix? > > > > > > > > Thanks. > > > > > > > > On Fri, May 31, 2013 at 4:04 PM, Konstantin Boudnik > > wrote: > > > > > All, > > > > > > > > > > I have created a release candidate (rc0) for hadoop-2.0.5-alpha that I > > > > > would like to release. > > > > > > > > > > This is a stabilization release that includes fixes for a couple of issues > > > > > discovered in testing with the BigTop 0.6.0 release candidate. > > > > > > > > > > The RC is available at: > > > > > http://people.apache.org/~cos/hadoop-2.0.5-alpha-rc0/ > > > > > The RC tag in svn is here: > > > > > http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.5-alpha-rc0 > > > > > > > > > > The maven artifacts will be available via repository.apache.org on Sat, June > > > > > 1st, 2013 at 2 pm PDT as outlined here: > > > > > http://s.apache.org/WKD > > > > > > > > > > Please try the release bits and vote; the vote will run for 3 days, > > > > > because this is just a version name change. The bits are identical to the > > > > > ones voted on before in > > > > > http://s.apache.org/2041move > > > > > > > > > > Thanks for voting, > > > > > Cos > > > > -- > > > > Alejandro > -- Alejandro
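For anyone repeating the verification described above, the steps boil down to a handful of commands. This is only a rough sketch, assuming the RC tarball, its .asc signature, and the project's KEYS file have already been downloaded from the RC directory linked above; the file names and the example job are illustrative rather than anything prescribed in this thread:

    # Check the signature and MD5 of the source tarball (names assumed)
    gpg --import KEYS
    gpg --verify hadoop-2.0.5-alpha-src.tar.gz.asc hadoop-2.0.5-alpha-src.tar.gz
    md5sum hadoop-2.0.5-alpha-src.tar.gz    # compare against the published checksum

    # Build a distribution and run a sample MR job on a pseudo-distributed cluster
    tar xzf hadoop-2.0.5-alpha-src.tar.gz && cd hadoop-2.0.5-alpha-src
    mvn package -Pdist -DskipTests -Dtar
    # ...unpack the generated dist tarball, configure and start HDFS/YARN per the
    # single-node setup docs, then from the unpacked distribution:
    bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10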
Re: [VOTE] Release Apache Hadoop 2.0.5-alpha
Verified MD5 & signature, built, configured a pseudo cluster, ran a couple of sample jobs, tested HTTPFS. Still, something seems odd. The HDFS CHANGES.txt has the following entry under 2.0.5-alpha: HDFS-4646. createNNProxyWithClientProtocol ignores configured timeout value (Jagane Sundar via cos) but I don't see that in the branch. And the YARN CHANGES.txt does not have the 2.0.5-alpha section (it should be there, empty). Cos, can you please look at these 2 things and explain/fix? Thanks. On Fri, May 31, 2013 at 4:04 PM, Konstantin Boudnik wrote: > All, > > I have created a release candidate (rc0) for hadoop-2.0.5-alpha that I > would like to release. > > This is a stabilization release that includes fixes for a couple of > issues discovered in testing with the BigTop 0.6.0 release candidate. > > The RC is available at: > http://people.apache.org/~cos/hadoop-2.0.5-alpha-rc0/ > The RC tag in svn is here: > http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.5-alpha-rc0 > > The maven artifacts will be available via repository.apache.org on Sat, June > 1st, 2013 at 2 pm PDT as outlined here: > http://s.apache.org/WKD > > Please try the release bits and vote; the vote will run for 3 days, > because this is just a version name change. The bits are identical to the ones > voted on before in > http://s.apache.org/2041move > > Thanks for voting, > Cos > > -- Alejandro
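One way to double-check whether a fix such as HDFS-4646 actually landed in a release branch, independently of CHANGES.txt, is to inspect the branch history and merge info directly. A sketch only; the paths assume the usual Hadoop svn layout, and the revision numbers come from Cos's reply above:

    BRANCH=http://svn.apache.org/repos/asf/hadoop/common/branches/branch-2.0.4-alpha
    TRUNK=http://svn.apache.org/repos/asf/hadoop/common/trunk

    # Search the branch log for the JIRA key (this won't match when the commit
    # message only says "svn merge -cNNN from trunk", as happened here)
    svn log $BRANCH | grep -B 2 -A 2 "HDFS-4646"

    # Check whether the trunk revision carrying the fix was merged into the branch
    svn mergeinfo --show-revs merged $TRUNK $BRANCH | grep 1465121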
Re: Heads up: moving from 2.0.4.1-alpha to 2.0.5-alpha
Cos, just to be clear, this is happening SAT JUN01 1PM-2PM PST, not now (FRI MAY31 1PM PST). Correct? Thx On Fri, May 31, 2013 at 12:45 PM, Konstantin Boudnik wrote: > Guys, > > I will be performing some changes wrt moving the 2.0.4.1 release candidate to > the 2.0.5 space. As outlined below by Alejandro: > > 1. I will create a new 2.0.5-alpha branch from the current head of 2.0.4-alpha > that contains the 2.0.4.1 changes > 2. consequently, set the artifacts version on the new branch to be 2.0.5-alpha > 3. the CHANGES.txt will be updated accordingly on the new 2.0.5 branch > 4. At this point I can cut an RC and put it out for re-vote. The staging can > be done after the next two steps. > > I will be doing all these modifications in the next hour or so. > > Tomorrow at 1 pm PDT I would like to: > 1. update the version of the artifacts on branch-2 to become 2.1.0-SNAPSHOT > 2. update the CHANGES.txt in trunk and branch-2 to reflect the new version > names > 3. at this point it should be safe to do the staging for the 2.0.5-alpha RC > > To avoid any collisions during the last two steps - especially 2. - I would > ask everyone to hold off on modifications of the CHANGES.txt files on trunk > and branch-2 between 1 pm and 2 pm PDT. > > Please let me know if you see any flaws in the above, or have questions. > Cos > > > As we change from 2.0.4.1 to 2.0.5 you'll need to do the following > > housekeeping as you work the new RC. > > > > * rename the svn branch > > * update the versions in the POMs > > * update the CHANGES.txt in trunk, branch-2 and the release branch > > * change the current 2.0.5 version in JIRA to 2.1.0, create a new 2.0.5 > > version, change the fix version of the 2 JIRAs that make up the RC > > > I renamed 2.0.5-beta to 2.1.0-beta and 2.0.4.1-alpha to 2.0.5-alpha versions > > in jira for HADOOP, HDFS, YARN & MAPREDUCE. > > > Please take care of the rest. > > > Also, in branch-2, the version should be 2.1.0-SNAPSHOT. > -- Alejandro
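For reference, steps 1 and 2 of the plan above map to roughly the following commands. This is a sketch under the assumption of the standard Hadoop svn layout and the versions-maven-plugin, not the project's actual release runbook:

    REPO=http://svn.apache.org/repos/asf/hadoop/common

    # 1. Branch 2.0.5-alpha from the current head of branch-2.0.4-alpha
    svn copy $REPO/branches/branch-2.0.4-alpha \
             $REPO/branches/branch-2.0.5-alpha \
             -m "Create branch-2.0.5-alpha from branch-2.0.4-alpha"

    # 2. Set the Maven artifact version on the new branch
    svn checkout $REPO/branches/branch-2.0.5-alpha && cd branch-2.0.5-alpha
    mvn versions:set -DnewVersion=2.0.5-alpha -DgenerateBackupPoms=false
    svn commit -m "Set version to 2.0.5-alpha"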
Re: [VOTE] Release Apache Hadoop 2.0.4.1-alpha
Konstantin, Cos, As we change from 2.0.4.1 to 2.0.5 you'll need to do the following housekeeping as you work the new RC. * rename the svn branch * update the versions in the POMs * update the CHANGES.txt in trunk, branch-2 and the release branch * change the current 2.0.5 version in JIRA to 2.1.0, create a new 2.0.5 version, change the fix version of the 2 JIRAs that make up the RC Thanks. On Thu, May 30, 2013 at 6:18 PM, Chris Douglas wrote: > On Thu, May 30, 2013 at 5:51 PM, Konstantin Boudnik > wrote: > > I have no issue with changing the version to 2.0.5-alpha and restarting the vote > > on the release content, e.g. 2 bug fixes. Shall I call a 3-day re-vote because > > of the number change? > > +1 Sounds great. > > > Does the result of the bylaw vote nullify the unfinished vote started by > Arun? > > Sorry, I am dense, apparently. > > Yes, nobody should feel bound by either vote. The bylaw change > clarifies that release plans are for RMs to solicit feedback and gauge > PMC support for an artifact, not pre-approvals for doing work. > > > Can we limit the vote thread to the merits of the release then? > > Happily. > > > That sounds like adding insult to injury, if my fourth-language skills do not > > mislead me. > > They do mislead you, or I've expressed the point imprecisely. We can > take this offline. -C > > >> >> > On Thu, May 30, 2013 at 01:48PM, Chris Douglas wrote: > >> >> >> On Thu, May 30, 2013 at 10:57 AM, Arun C Murthy <a...@hortonworks.com> wrote: > >> >> >> > Why not include MAPREDUCE-4211 as well rather than create one release per patch? > >> >> >> > >> >> >> From Cos's description, it sounded like these were backports of fixes > >> >> >> to help Sqoop2 and fix some build issues. If it's not just to fix up > >> >> >> leftover bugs in 2.0.4 *once* so downstream projects can integrate > >> >> >> against 2.0.4.1, and this is a release series, then I've completely > >> >> >> misunderstood the purpose. > >> >> >> > >> >> >> Cos, are you planning 2.0.4.2? > >> >> >> > >> >> >> > Also, this is the first time we are seeing a four-numbered scheme in Hadoop. Why not call this 2.0.5-alpha? > >> >> >> > >> >> >> Good point. Since it contains only backports from branch-2, it would > >> >> >> make sense for it to be an intermediate release. > >> >> >> > >> >> >> I shouldn't have to say this, but I'm changing my vote to -1 while we > >> >> >> work this out. -C > >> >> >> > >> >> >> > On May 24, 2013, at 8:48 PM, Konstantin Boudnik wrote: > >> >> >> >> All, > >> >> >> >> > >> >> >> >> I have created a release candidate (rc0) for hadoop-2.0.4.1-alpha that I would > >> >> >> >> like to release. > >> >> >> >> > >> >> >> >> This is a stabilization release that includes fixes for a couple of issues > >> >> >> >> discovered in testing with the BigTop 0.6.0 release candidate. > >> >> >> >> > >> >> >> >> The RC is available at: http://people.apache.org/~cos/hadoop-2.0.4.1-alpha-rc0/ > >> >> >> >> The RC tag in svn is here: http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.4.1-alpha-rc0 > >> >> >> >> > >> >> >> >> The maven artifacts are available via repository.apache.org. > >> >> >> >> > >> >> >> >> Please try the release bits and vote; the vote will run for the usual 7 days. > >> >> >> >> > >> >> >> >> Thanks for voting, > >> >> >> >> Cos > -- Alejandro
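The JIRA side of that housekeeping (rename the existing version, create a fresh one, move fix versions) can be done through the UI or scripted against the JIRA REST API. A hedged sketch only; the credentials, VERSION_ID, and ISSUE-KEY are placeholders that would have to be looked up first (for example via /rest/api/2/project/HADOOP/versions), and the exact API version available on issues.apache.org at the time is an assumption:

    JIRA=https://issues.apache.org/jira/rest/api/2

    # Rename an existing version (VERSION_ID is a placeholder)
    curl -u user:pass -X PUT -H "Content-Type: application/json" \
         -d '{"name": "2.1.0"}' $JIRA/version/VERSION_ID

    # Create the new 2.0.5 version in a project
    curl -u user:pass -X POST -H "Content-Type: application/json" \
         -d '{"name": "2.0.5", "project": "HADOOP"}' $JIRA/version

    # Point an issue's fix version at the new release (ISSUE-KEY is a placeholder)
    curl -u user:pass -X PUT -H "Content-Type: application/json" \
         -d '{"update": {"fixVersions": [{"set": [{"name": "2.0.5"}]}]}}' \
         $JIRA/issue/ISSUE-KEY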
Re: [VOTE] Release Apache Hadoop 2.0.4.1-alpha
On the version number we use, if it is greater than 2.0.4, I really don't care. Though I think Konstantin's argument that branch-2 is publishing as 2.0.5-SNAPSHOT has some merit (still, it could be argued that they are DEV JARs so they can be in flux). On the changes that went into this RC, they are exactly a fix for Sqoop2 to work with the release and a build fix for Bigtop. As far as I can tell there is nothing else in this RC as compared with 2.0.4-alpha. So, effectively, this RC is just a fixup of 2.0.4 for Sqoop and Bigtop. MAPREDUCE-5211 seems nasty enough to be included. But I'd leave that to the RM (Cos in this case) to decide if he wants to go ahead without it and then do a 2.0.4.2. Personally, I would cut a second RC including MAPREDUCE-5211. But I don't think that not having it would be reason enough for a -1 (if that is the reason for the -1). Thanks. On Thu, May 30, 2013 at 1:48 PM, Chris Douglas wrote: > On Thu, May 30, 2013 at 10:57 AM, Arun C Murthy > wrote: > > Why not include MAPREDUCE-4211 as well rather than create one release > per patch? > > From Cos's description, it sounded like these were backports of fixes > to help Sqoop2 and fix some build issues. If it's not just to fix up > leftover bugs in 2.0.4 *once* so downstream projects can integrate > against 2.0.4.1, and this is a release series, then I've completely > misunderstood the purpose. > > Cos, are you planning 2.0.4.2? > > > Also, this is the first time we are seeing a four-numbered scheme in > Hadoop. Why not call this 2.0.5-alpha? > > Good point. Since it contains only backports from branch-2, it would > make sense for it to be an intermediate release. > > I shouldn't have to say this, but I'm changing my vote to -1 while we > work this out. -C > > > On May 24, 2013, at 8:48 PM, Konstantin Boudnik wrote: > >> All, > >> > >> I have created a release candidate (rc0) for hadoop-2.0.4.1-alpha that I would > >> like to release. > >> > >> This is a stabilization release that includes fixes for a couple of issues > >> discovered in testing with the BigTop 0.6.0 release candidate. > >> > >> The RC is available at: http://people.apache.org/~cos/hadoop-2.0.4.1-alpha-rc0/ > >> The RC tag in svn is here: http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.4.1-alpha-rc0 > >> > >> The maven artifacts are available via repository.apache.org. > >> > >> Please try the release bits and vote; the vote will run for the usual 7 days. > >> > >> Thanks for voting, > >> Cos > -- Alejandro
Re: [VOTE] Release Apache Hadoop 0.23.8
+1, verified MD5 and signature. Did a full build, started a pseudo cluster, ran a few MR jobs, verified httpfs works. Thanks. On Tue, May 28, 2013 at 9:00 AM, Thomas Graves wrote: > > I've created a release candidate (RC0) for hadoop-0.23.8 that I would like > to release. > > This release is a sustaining release with several important bug fixes in > it. The most critical one is MAPREDUCE-5211. > > The RC is available at: > http://people.apache.org/~tgraves/hadoop-0.23.8-candidate-0/ > The RC tag in svn is here: > http://svn.apache.org/viewvc/hadoop/common/tags/release-0.23.8-rc0/ > > The maven artifacts are available via repository.apache.org. > > Please try the release and vote; the vote will run for the usual 7 days. > > I am +1 (binding). > > thanks, > Tom Graves > > -- Alejandro
Re: [VOTE] Release Apache Hadoop 2.0.4.1-alpha
+1, verified MD5 and signature. Did a full build, started a pseudo cluster, ran a few MR jobs, verified httpfs works. Thanks. On Sat, May 25, 2013 at 10:01 AM, Sangjin Lee wrote: > +1 (non-binding) > > Thanks, > Sangjin > > > On Fri, May 24, 2013 at 8:48 PM, Konstantin Boudnik > wrote: > > > All, > > > > I have created a release candidate (rc0) for hadoop-2.0.4.1-alpha that I > > would like to release. > > > > This is a stabilization release that includes fixes for a couple of issues > > discovered in testing with the BigTop 0.6.0 release candidate. > > > > The RC is available at: > > http://people.apache.org/~cos/hadoop-2.0.4.1-alpha-rc0/ > > The RC tag in svn is here: > > http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.4.1-alpha-rc0 > > > > The maven artifacts are available via repository.apache.org. > > > > Please try the release bits and vote; the vote will run for the usual 7 days. > > > > Thanks for voting, > > Cos > > > -- Alejandro
Re: CHANGES.txt out of sync in the different branches
Thanks for taking care of this, Sid. Agreed that using jira fix versions would be an easier way to generate the changes. It would require some proper handling by the committer. For example, if something is committed to trunk (3.0.0) and branch-2 (2.0.5) it would have fixVersion 2.0.5; if it is later backported to a soon-to-be-released 2.0.4, the fixVersion must be updated to 2.0.4. Another scenario is when you have different release lines, i.e., 1.x and 2.x (or 2.0.x and 2.1.x); then the fixVersion should contain each line's version. Still, IMO, much better than dealing with CHANGES.txt in different branches. Thanks again On Thu, Apr 11, 2013 at 12:44 AM, Siddharth Seth wrote: > I went ahead and committed a couple of changes to trunk and branch-2 to fix > the MR CHANGES.txt mess. Alejandro, I believe there's a couple of jiras > where you were waiting for CHANGES.txt fixes before merging to branch-2. > Not sure about past discussions, but can we re-visit removing CHANGES.txt > in favour of jira fix versions? > > While we maintain this file, a couple of points to prevent this ... > - For changes that go into 0.23, the CHANGES.txt entry needs to be included > under the 2.x line as well since 2.x and 0.23 are on different release > schedules. > - When merging into a branch, please update CHANGES.txt on the src branch > as well. trunk was out of sync on a lot of changes which have gone into > 2.0.5. > - Something going wrong with these files isn't always discovered > immediately, but adds up fast - so a little extra caution with changes, > merge-conflicts etc. on these files... > > There's jiras like MAPREDUCE-4790, for which the fix version on jira is > trunk-win, but the patch is already in trunk. Does fix version trunk-win > imply the patch is in trunk as well? > > Thanks > - Sid > > > On Wed, Apr 3, 2013 at 3:02 PM, Vinod Kumar Vavilapalli < > vino...@hortonworks.com> wrote: > > > I started looking at this yesterday; MAPREDUCE CHANGES.txt is completely > > broken. I'll try to fix it and then send out a note once I am done. > > > > Thanks, > > +Vinod > > > > On Apr 1, 2013, at 11:46 PM, Vinod Kumar Vavilapalli wrote: > > > > > > > > I've been looking at YARN and it seems to be fine. I presume common and > > hdfs too. > > > > > > MR clearly has issues. Have to manually fix it. Will do something > > tomorrow first thing. > > > > > > Thanks, > > > +Vinod Kumar Vavilapalli > > > > > > On Apr 1, 2013, at 3:53 PM, Alejandro Abdelnur wrote: > > > > > >> While trying to commit MAPREDUCE-5113 to branch-2 I've noticed that the > > >> CHANGES.txt files are out of sync. Commit messages are under the wrong releases. > > >> > > >> I've spent some time trying to fix it, but I did not find it straightforward > > >> to do so. > > >> > > >> I assume the same may be true for common, hdfs and yarn. > > >> > > >> I know the question of why we use CHANGES.txt files has been discussed in the past, so I'll > > >> not raise a suggestion to get rid of them. > > >> > > >> Does anybody have a simple list of steps to fix this? > > >> > > >> Thx > > >> > > >> -- > > >> Alejandro
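If CHANGES.txt were ever dropped in favour of jira fix versions, the change list for a release could be pulled straight out of JIRA with a query along these lines. A sketch only; the JQL, field list, and project keys are chosen for illustration:

    # List resolved issues whose fixVersion is 2.0.5-alpha across the Hadoop projects
    curl -G https://issues.apache.org/jira/rest/api/2/search \
         --data-urlencode 'jql=project in (HADOOP, HDFS, YARN, MAPREDUCE) AND fixVersion = "2.0.5-alpha" AND resolution = Fixed ORDER BY key' \
         --data-urlencode 'fields=key,summary,assignee' \
         --data-urlencode 'maxResults=500'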
[jira] [Created] (MAPREDUCE-5123) Reimplement things
Alejandro Abdelnur created MAPREDUCE-5123: Summary: Reimplement things Key: MAPREDUCE-5123 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5123 Project: Hadoop Map/Reduce Issue Type: Bug Affects Versions: 2.0.4-alpha Reporter: Alejandro Abdelnur Priority: Blocker We've got to the point where we need to reimplement things from scratch. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira