Re: Will there be a 2.2 patch releases?
The last discussion on this was in November; I presume that's still the plan:
http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201311.mbox/%3CA31E1430-33BE-437C-A61E-050F9A67C109%40hortonworks.com%3E

On 3 January 2014 04:10, Raymie Stata rst...@altiscale.com wrote:

Nudge, any thoughts?

On Sun, Dec 29, 2013 at 1:25 AM, Raymie Stata rst...@altiscale.com wrote:

In discussing YARN-1295, it's become clear that I'm confused about the outcome of the "Next releases" thread. I had assumed there would be patch releases to 2.2, and indeed that one would be coming out early in Q1. Is this correct?

If so, then things seem a little messed up right now in 2.2-land. There already is a branch-2.2.1, but there hasn't been a release, and branch-2.2 has Maven version 2.2.2-SNAPSHOT. Due to the 2.3 rename a few weeks ago, it might be that the first patch release for 2.2 needs to be 2.2.2. But if so, notice these lists of fixes for 2.2.1:

https://issues.apache.org/jira/browse/YARN/fixforversion/12325667
https://issues.apache.org/jira/browse/HDFS/fixforversion/12325666

Do these need to have their fix-versions updated?

Raymie

P.S. While we're on the subject of point releases, let me check my assumptions. I assumed that, for release x.y.z, fixes deemed to be critical bug fixes would be put into branch-x.y as a matter of course. The Maven release number in branch-x.y would be x.y.(z+1)-SNAPSHOT, and JIRAs (to be) committed to branch-x.y would have x.y.(z+1) as one of their fix-versions. When enough fixes have accumulated to warrant a release, or when a fix comes up that is critical enough to warrant an immediate release, branch-x.y is branched to branch-x.y.(z+1) and a release is made. (As Hadoop itself moves from x.y to x.(y+1) and then x.(y+2), the threshold for what is considered a critical bug would naturally start to rise, as the effort of back-porting goes up.) Do I have it right?
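The x.y.(z+1) convention described in the P.S. can be sketched with a tiny helper. This is a hypothetical illustration of the numbering scheme only; neither the class nor the method is part of any Hadoop tooling.

```java
// Hypothetical helper illustrating the patch-release numbering described
// above: given release x.y.z, the next patch release on branch-x.y is
// x.y.(z+1), and the branch carries Maven version x.y.(z+1)-SNAPSHOT.
class PatchVersion {
    static String nextPatch(String version) {
        int lastDot = version.lastIndexOf('.');
        int patch = Integer.parseInt(version.substring(lastDot + 1));
        return version.substring(0, lastDot + 1) + (patch + 1);
    }

    public static void main(String[] args) {
        // e.g. the branch version that would follow a 2.2.1 release:
        System.out.println(nextPatch("2.2.1") + "-SNAPSHOT"); // prints 2.2.2-SNAPSHOT
    }
}
```

Under this convention, branch-2.2 sitting at 2.2.2-SNAPSHOT implies the next patch release off it would be 2.2.2, which is exactly the confusion raised above given that 2.2.1 was never released.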
-- CONFIDENTIALITY NOTICE NOTICE: This message is intended for the use of the individual or entity to which it is addressed and may contain information that is confidential, privileged and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient, you are hereby notified that any printing, copying, dissemination, distribution, disclosure or forwarding of this communication is strictly prohibited. If you have received this communication in error, please contact the sender immediately and delete it from your system. Thank You.
Build failed in Jenkins: Hadoop-Common-trunk #1001
See https://builds.apache.org/job/Hadoop-Common-trunk/1001/changes

Changes:

[todd] HADOOP-10200. Fix precommit script patch_tested.txt fallback option. Contributed by Brock Noland.
[wang] HDFS-5651. Remove dfs.namenode.caching.enabled and improve CRM locking. Contributed by Colin Patrick McCabe.
[vinodkv] YARN-1493. Changed ResourceManager and Scheduler interfacing to recognize app-attempts separately from apps. Contributed by Jian He.
[wang] HDFS-5659. dfsadmin -report doesn't output cache information properly. Contributed by Andrew Wang.
[wang] Amend CHANGES.txt for HADOOP-10198
[wang] HADOOP-10198. DomainSocket: add support for socketpair. Contributed by Colin Patrick McCabe.
[vinodkv] YARN-1549. Fixed a bug in ResourceManager's ApplicationMasterService that was causing unamanged AMs to not finish correctly. Contributed by haosdent.
[todd] HADOOP-10199. Precommit Admin build is not running because no previous successful build is available. Contributed by Brock Noland.
[kihwal] HADOOP-10173. Remove UGI from DIGEST-MD5 SASL server creation. Contributed by Daryn Sharp.
[stevel] HADOOP-10147 HDFS-5678 Upgrade to commons-logging 1.1.3 to avoid potential deadlock in MiniDFSCluster

[...truncated 64245 lines...]
MapReduce V1 vs MapReduce V2
I'm thoroughly confused about which API is the recent one, which is the old one, and which method I should be using to write MapReduce applications.

I'm under the impression that MRv2 is primarily driven by the org.apache.hadoop.mapreduce.* packages and MRv1 is primarily driven by the org.apache.hadoop.mapred.* packages. I've been led to believe that MRv2 applications extend MapReduceBase and implement Mapper, Reducer, etc., and conversely that MRv1 applications extend Mapper and Reducer directly. However, I cannot find a canonical statement to back any of this up. What's more, I keep finding conflicting statements about these: for example, 'Hadoop: The Definitive Guide' gives examples in MRv2 format, but then I look at the examples and they use the org.apache.hadoop.mapreduce.* packages yet extend Mapper and Reducer, not MapReduceBase...

Can someone either point me at a canonical resource or just confirm/deny my assumptions?

Kind regards

--
First Option Software Ltd
Signal House, Jacklyns Lane, Alresford SO24 9JJ
Tel: +44 (0)1962 738232  Mob: +44 (0)7710 160458  Fax: +44 (0)1962 600112
Web: www.bespokesoftware.com

This is confidential, non-binding and not company endorsed - see full terms at www.fosolutions.co.uk/emailpolicy.html First Option Software Ltd Registered No. 06340261 Signal House, Jacklyns Lane, Alresford, Hampshire, SO24 9JJ, U.K.
Video: how to commit a patch to hadoop
For new committers and for the curious, I've just stuck up a screen capture with commentary on how to commit a patch to the Hadoop SVN repository: http://youtu.be/txW3m7qWdzw

It's 27 minutes long, not just because of the commentary but because you have to be as rigorous committing a one-line patch as you do committing a whole new module. I'd love to see how the git+gerrit projects (like Accumulo) work, to see if we could speed up both the review and the check-in.

-steve
Re: Video: how to commit a patch to hadoop
great~ Thanks Steve

2014/1/3 Steve Loughran ste...@hortonworks.com:

For new committers and for the curious, I've just stuck up a screen capture with commentary on how to commit a patch to the Hadoop SVN repository: http://youtu.be/txW3m7qWdzw

--
*Regards,*
*Zhaojie*
Re: MapReduce V1 vs MapReduce V2
Hi Matt,

In my opinion, the basic difference between MapReduce V1 and V2 is not the mapred vs. mapreduce API package, but the platform that runs the job. In MapReduce V1, the job was managed by the JobTracker and TaskTrackers. In MapReduce V2, the resource-management part of the MapReduce project has been spun off and has evolved into YARN, a generic distributed resource-management system; MapReduce, as well as other types of applications, can run on this common platform. The remaining part, which is the code base of MapReduce V2, is a pure distributed computation framework.

With regard to the API packages, both mapred.* and mapreduce.* have existed since MapReduce V1, but mapreduce.* has been evolving a lot. If you're writing a new MapReduce application against the latest Hadoop libraries, it's MapReduce V2 no matter whether you use mapred.* or mapreduce.*. If you already have MapReduce applications that were built with the MapReduce V1 framework and use the mapred.* APIs, they are supposed to run on YARN without problems. However, if those applications use the mapreduce.* APIs, you may need to recompile them against the MapReduce V2 framework to be able to run them on YARN.

Here are some resources you may want to look at for further information:

http://hortonworks.com/hadoop/yarn/
http://hortonworks.com/blog/running-existing-applications-on-hadoop-2-yarn/
http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/

Thanks,
Zhijie

On Fri, Jan 3, 2014 at 2:19 AM, Matt Fellows matt.fell...@bespokesoftware.com wrote:

I'm thoroughly confused about which API is the recent one, which is the old one and which method I should be using to write MapReduce applications. I'm under the impression that MRv2 is primarily driven by the org.apache.hadoop.mapreduce.* packages and MRv1 is primarily driven by the org.apache.hadoop.mapred.* packages.
--
Zhijie Shen
Hortonworks Inc.
http://hortonworks.com/
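To make the package distinction concrete, here is an illustrative skeleton of a mapper in each API, as I understand it: in the old mapred API, Mapper is an interface and MapReduceBase supplies the empty configure()/close() implementations (so it is MRv1 code, not MRv2, that mentions MapReduceBase), while in the new mapreduce API Mapper is a class you extend directly. These skeletons are a sketch only; they compile against hadoop-mapreduce-client-core, not on their own, and the word-count-style type parameters are just an example.

```java
// Old "mapred" API: Mapper is an interface; MapReduceBase supplies empty
// configure()/close() implementations, so you extend it and implement Mapper.
class OldApiMapper extends org.apache.hadoop.mapred.MapReduceBase
    implements org.apache.hadoop.mapred.Mapper<org.apache.hadoop.io.LongWritable,
        org.apache.hadoop.io.Text, org.apache.hadoop.io.Text, org.apache.hadoop.io.IntWritable> {
  public void map(org.apache.hadoop.io.LongWritable key, org.apache.hadoop.io.Text value,
      org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.Text,
          org.apache.hadoop.io.IntWritable> output,
      org.apache.hadoop.mapred.Reporter reporter) throws java.io.IOException {
    // ... emit pairs via output.collect(...) ...
  }
}

// New "mapreduce" API: Mapper is a class, so you extend it directly --
// no MapReduceBase involved. This matches The Definitive Guide's examples.
class NewApiMapper extends org.apache.hadoop.mapreduce.Mapper<
    org.apache.hadoop.io.LongWritable, org.apache.hadoop.io.Text,
    org.apache.hadoop.io.Text, org.apache.hadoop.io.IntWritable> {
  @Override
  protected void map(org.apache.hadoop.io.LongWritable key, org.apache.hadoop.io.Text value,
      Context context) throws java.io.IOException, InterruptedException {
    // ... emit pairs via context.write(...) ...
  }
}
```

Either style of mapper can run on YARN; the package (and whether MapReduceBase appears) tells you which API generation the code was written against, not which platform executes it.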
[jira] [Created] (HADOOP-10203) Connection leak in Jets3tNativeFileSystemStore#retrieveMetadata
Andrei Savu created HADOOP-10203:

Summary: Connection leak in Jets3tNativeFileSystemStore#retrieveMetadata
Key: HADOOP-10203
URL: https://issues.apache.org/jira/browse/HADOOP-10203
Project: Hadoop Common
Issue Type: Bug
Components: fs/s3
Environment: CDH 2.0.0-cdh4.5.0 (30821ec616ee7a21ee8447949b7c6208a8f1e7d8)
Reporter: Andrei Savu
Attachments: HADOOP-10203.patch

Jets3tNativeFileSystemStore#retrieveMetadata is leaking connections. This affects any client that tries to read many small files very quickly (e.g. a distcp from S3 to HDFS with small files blocks due to connection pool starvation). This is not a problem for larger files because, when the GC runs, any connection that's out of scope is released in #finalize(). We are seeing the following log messages as a symptom of this problem:

{noformat}
13/12/26 13:40:01 WARN httpclient.HttpMethodReleaseInputStream: Attempting to release HttpMethod in finalize() as its response data stream has gone out of scope. This attempt will not always succeed and cannot be relied upon! Please ensure response data streams are always fully consumed or closed to avoid HTTP connection starvation.
13/12/26 13:40:01 WARN httpclient.HttpMethodReleaseInputStream: Successfully released HttpMethod in finalize(). You were lucky this time... Please ensure response data streams are always fully consumed or closed.
{noformat}

-- This message was sent by Atlassian JIRA (v6.1.5#6160)
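The general shape of the fix is the classic close-in-finally pattern: release the response stream on every code path instead of leaving it to the finalizer. This is a minimal sketch of that pattern only, not the actual Jets3t or Hadoop code; openObjectStream() is a hypothetical stand-in for the S3 object stream.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Generic sketch of the fix pattern: close the response stream in finally
// so the pooled HTTP connection is released promptly, never via finalize().
class MetadataReader {
    // Hypothetical stand-in for opening an S3 object's data stream.
    static InputStream openObjectStream() {
        return new ByteArrayInputStream(new byte[] {42});
    }

    static int retrieveMetadataSketch() throws IOException {
        InputStream in = null;
        try {
            in = openObjectStream();
            return in.read(); // read whatever is needed from the response
        } finally {
            if (in != null) {
                in.close(); // releases the connection on every path
            }
        }
    }
}
```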
[jira] [Created] (HADOOP-10204) ThrottledInputStream should #close() the underlying stream
Andrei Savu created HADOOP-10204:

Summary: ThrottledInputStream should #close() the underlying stream
Key: HADOOP-10204
URL: https://issues.apache.org/jira/browse/HADOOP-10204
Project: Hadoop Common
Issue Type: Bug
Components: fs
Environment: CDH 2.0.0-cdh4.5.0 (30821ec616ee7a21ee8447949b7c6208a8f1e7d8)
Reporter: Andrei Savu

While working on HADOOP-10203 I've noticed that ThrottledInputStream (DistCp V2) does not override #close(). This can also leak connections.

-- This message was sent by Atlassian JIRA (v6.1.5#6160)
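The bug class is easy to reproduce with any wrapper stream: if the wrapper does not override close(), closing the wrapper leaves the underlying stream (and its connection) open. This is a minimal, hypothetical analogue of a throttling wrapper, not the actual DistCp code; the throttling logic itself is elided.

```java
import java.io.IOException;
import java.io.InputStream;

// Hypothetical analogue of a throttling wrapper stream. Before a fix like
// HADOOP-10204's, a wrapper that extends InputStream directly and never
// overrides close() leaves rawStream open after the wrapper is "closed".
class ThrottledStreamSketch extends InputStream {
    private final InputStream rawStream;

    ThrottledStreamSketch(InputStream rawStream) {
        this.rawStream = rawStream;
    }

    @Override
    public int read() throws IOException {
        // (throttling logic elided in this sketch)
        return rawStream.read();
    }

    // The fix: propagate close() to the wrapped stream so the underlying
    // resource (e.g. an HTTP connection) is actually released.
    @Override
    public void close() throws IOException {
        rawStream.close();
    }
}
```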
Re: Next releases
We plan to merge HDFS-2832 to branch-2 next week for inclusion in 2.4.

On Fri, Dec 6, 2013 at 1:53 PM, Arun C Murthy a...@hortonworks.com wrote:

Thanks Suresh, Colin. Please update the Roadmap wiki with your proposals. As always, we will try our best to get these in, but we can collectively decide to slip some of them to subsequent releases based on timelines. Arun

On Dec 6, 2013, at 10:43 AM, Suresh Srinivas sur...@hortonworks.com wrote:

Arun, I propose the following changes for 2.3:
- There have been a lot of improvements related to supporting HTTP policy.
- There is still a discussion going on, but I would like to deprecate the BackupNode in 2.3 as well.
- We are currently working on rolling-upgrade-related changes in HDFS. We might add a couple of changes that enable rolling upgrades from 2.3 onwards (hopefully we can get this done by December).

I propose the following for the 2.4 release, if they are tested and stable:
- Heterogeneous storage support - HDFS-2832
- Datanode cache related change - HDFS-4949
- HDFS ACLs - HDFS-4685
- Rolling upgrade changes

Let me know if you want me to update the wiki. Regards, Suresh

On Dec 6, 2013, at 12:27 PM, Colin McCabe cmcc...@alumni.cmu.edu wrote:

If 2.4 is released in January, I think it's very unlikely to include symlinks. There is still a lot of work to be done before they're usable; you can look at the progress on HADOOP-10019. For some of the subtasks, it will require some community discussion before any code can be written. For better or worse, symlinks have not been requested by users as often as features like NFS export, HDFS caching, ACLs, etc., so effort has been focused on those instead. For now, I think we should put the symlinks-disabling patches (HADOOP-10020, etc.) into branch-2, so that they will be part of the next releases without additional effort. I would like to see HDFS caching make it into 2.4.
The APIs and implementation are beginning to stabilize, and around January it should be OK to backport to a stable branch.

best, Colin

On Thu, Nov 7, 2013 at 6:42 PM, Arun C Murthy a...@hortonworks.com wrote:

Gang, thinking through the next couple of releases here; appreciate feedback.

# hadoop-2.2.1

I was looking through commit logs and there is a *lot* of content here (81 commits as of 11/7). Some are features/improvements and some are fixes; it's really hard to distinguish what is important and what isn't. I propose we start with a blank slate (i.e. blow away branch-2.2 and start fresh from a copy of branch-2.2.0) and then be very careful and meticulous about including only *blocker* fixes in branch-2.2. So, most of the content here comes via the next minor release (i.e. hadoop-2.3). In future, we continue to be *very* parsimonious about what gets into a patch release (major.minor.patch); in general, these should be only *blocker* fixes or key operational issues.

# hadoop-2.3

I'd like to propose the following features for YARN/MR to make it into hadoop-2.3, and punt the rest to hadoop-2.4 and beyond:
* Application History Server - This is happening in a branch and is close; with it we can provide a reasonable experience for new frameworks being built on top of YARN.
* Bug fixes in RM restart
* Minimal support for long-running applications (e.g. security) via YARN-896
* RM fail-over via ZKFC
* Anything else? HDFS???

Overall, I feel like we have a decent chance of rolling hadoop-2.3 by the end of the year. Thoughts?

thanks, Arun

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/
Edit permissions for my Hadoop wiki account
Hi all, Could someone give my wiki account edit permissions? Username is AndrewWang. Thanks, Andrew
Re: Edit permissions for my Hadoop wiki account
Could some kind admin do the same for my account too? My Hadoop wiki username is ArpitAgarwal. Thanks!

On Fri, Jan 3, 2014 at 4:54 PM, Andrew Wang andrew.w...@cloudera.com wrote:

Hi all, Could someone give my wiki account edit permissions? Username is AndrewWang. Thanks, Andrew
Re: Will there be a 2.2 patch releases?
Yes, that thread is part of what's confusing me. Arun's initial 11/8 message suggests that there would be room for blocker fixes leading to a 2.2.1 patch release ("...and then be very careful about including only *blocker* fixes in branch-2.2"), and nothing else in that thread suggests that there wouldn't be a patch release. And yet, Sandy seems to think that 2.2.1 isn't happening at all (YARN-1295), a view that's consistent with the currently confused state of the repo (branch-2.2.1 exists but has not been released; branch-2.2's version is 2.2.2-SNAPSHOT). Seems to me that we should be planning for a 2.2.1 patch release at some point...

Raymie

On Fri, Jan 3, 2014 at 1:17 AM, Steve Loughran ste...@hortonworks.com wrote:

the last discussion on this was in November; I presume that's still the plan:
http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201311.mbox/%3CA31E1430-33BE-437C-A61E-050F9A67C109%40hortonworks.com%3E
Re: Edit permissions for my Hadoop wiki account
Added you both.

On Fri, Jan 3, 2014 at 4:58 PM, Arpit Agarwal aagar...@hortonworks.com wrote:

Could some kind admin do the same for my account too? My Hadoop wiki username is ArpitAgarwal. Thanks!
Re: Edit permissions for my Hadoop wiki account
Thanks Eli.

On Fri, Jan 3, 2014 at 5:01 PM, Eli Collins e...@cloudera.com wrote:

Added you both.
Re: Video: how to commit a patch to hadoop
Thank you Steve! I think we can put the video link on the Hadoop wiki and update it if anything changes in future. Thoughts?

Thanks,
Junping

----- Original Message -----
From: Steve Loughran ste...@hortonworks.com
To: common-dev@hadoop.apache.org
Sent: Friday, January 3, 2014 8:07:25 PM
Subject: Video: how to commit a patch to hadoop

For new committers and for the curious, I've just stuck up a screen capture with commentary on how to commit a patch to the Hadoop SVN repository: http://youtu.be/txW3m7qWdzw
Re: Will there be a 2.2 patch releases?
Re-reading the thread, it seems what I said about 2.2.1 never happening was incorrect. My impression is still that nobody has plans to drive a 2.2.1 release on any particular timeline. The changes that are now in 2.3 have been moved out of the branch-2.2.1. I suppose the idea is that changes slated for 2.2.1 should be committed both to branch-2.2 and branch-2.2.1. -Sandy On Fri, Jan 3, 2014 at 4:57 PM, Raymie Stata rst...@altiscale.com wrote: Yes, that thread is part of what's confusing me. Arun's initial 11/8 message suggests that there would be room for blocker fixes leading to a 2.2.1 patch release (...and then be very careful about including only *blocker* fixes in branch-2.2). And nothing else in that thread suggests that there wouldn't be a patch release. And yet, Sandy seems to think that 2.2.1 isn't happening at all (YARN-1295), a view that's consistent with the currently confused state of the repo (branch-2.2.1 exists but not released, branch-2.2 version is 2.2.2-SNAPSHOT). Seems to me that we should be planning for a 2.2.1 patch release at some point... Raymie On Fri, Jan 3, 2014 at 1:17 AM, Steve Loughran ste...@hortonworks.com wrote: the last discussion on this was in november -I presume that's still the plan http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201311.mbox/%3CA31E1430-33BE-437C-A61E-050F9A67C109%40hortonworks.com%3E On 3 January 2014 04:10, Raymie Stata rst...@altiscale.com wrote: Nudge, any thoughts? On Sun, Dec 29, 2013 at 1:25 AM, Raymie Stata rst...@altiscale.com wrote: In discussing YARN-1295 it's become clear that I'm confused about the outcome of the Next releases thread. I had assumed there would be patch releases to 2.2, and indeed one would be coming out early Q1. Is this correct? If so, then things seem a little messed-up right now in 2.2-land. There already is a branch-2.2.1, but there hasn't been a release. And branch-2.2 has Maven version 2.2.2-SNAPSHOT. 
Due to the 2.3 rename a few weeks ago, it might be that the first patch release for 2.2 needs to be 2.2.2. But if so, notice these lists of fixes for 2.2.1:

https://issues.apache.org/jira/browse/YARN/fixforversion/12325667
https://issues.apache.org/jira/browse/HDFS/fixforversion/12325666

Do these need to have their fix-versions updated?

Raymie

P.S. While we're on the subject of point releases, let me check my assumptions. I assumed that, for release x.y.z, fixes deemed to be critical bug fixes would be put into branch-x.y as a matter of course. The Maven release number in branch-x.y would be x.y.(z+1)-SNAPSHOT, and JIRAs (to be) committed to branch-x.y would have x.y.(z+1) as one of their fix-versions. When enough fixes have accumulated to warrant a release, or when a fix comes up that is critical enough to warrant an immediate release, then branch-x.y is branched to branch-x.y.(z+1), and a release is made. (As Hadoop itself moves from x.y to x.(y+1) and then x.(y+2), the threshold for what is considered a critical bug would naturally rise, as the effort of back-porting goes up.) Do I have it right?
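The version arithmetic in the P.S. above can be sketched as a small shell helper. These function names are hypothetical (this is an illustration of the scheme being described, not an actual Hadoop release script); the svn/mvn steps in the comments are likewise only a sketch of the branching step.

```shell
# Given the last release x.y.z on branch-x.y, compute the next Maven
# snapshot version carried on branch-x.y: x.y.(z+1)-SNAPSHOT.
next_patch_version() {
  major=${1%%.*}                # x
  rest=${1#*.}
  minor=${rest%%.*}             # y
  patch=${rest#*.}              # z
  echo "${major}.${minor}.$((patch + 1))-SNAPSHOT"
}

# Name of the release branch cut from branch-x.y when enough fixes
# have accumulated: branch-x.y.(z+1).
release_branch_name() {
  major=${1%%.*}
  rest=${1#*.}
  minor=${rest%%.*}
  patch=${rest#*.}
  echo "branch-${major}.${minor}.$((patch + 1))"
}

next_patch_version 2.2.1    # prints 2.2.2-SNAPSHOT
release_branch_name 2.2.1   # prints branch-2.2.2

# Sketch of the release step itself (hypothetical paths/URLs):
#   svn copy .../branches/branch-2.2 .../branches/branch-2.2.2
#   mvn versions:set -DnewVersion=2.2.2   # on the release branch
```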
Re: Edit permissions for my Hadoop wiki account
Thanks!

On Fri, Jan 3, 2014 at 5:06 PM, Arpit Agarwal aagar...@hortonworks.com wrote:

Thanks Eli.

On Fri, Jan 3, 2014 at 5:01 PM, Eli Collins e...@cloudera.com wrote:

Added you both.

On Fri, Jan 3, 2014 at 4:58 PM, Arpit Agarwal aagar...@hortonworks.com wrote:

Could some kind admin do the same for my account too? My Hadoop wiki username is ArpitAgarwal. Thanks!

On Fri, Jan 3, 2014 at 4:54 PM, Andrew Wang andrew.w...@cloudera.com wrote:

Hi all,

Could someone give my wiki account edit permissions? Username is AndrewWang.

Thanks,
Andrew
Re: Video: how to commit a patch to hadoop
Great

On Sat, Jan 4, 2014 at 9:12 AM, Jun Ping Du j...@vmware.com wrote:

Thank you Steve! I think we can put the video link on the Hadoop wiki and update it if anything changes in the future. Thoughts?

Thanks,
Junping

----- Original Message -----
From: Steve Loughran ste...@hortonworks.com
To: common-dev@hadoop.apache.org
Sent: Friday, January 3, 2014 8:07:25 PM
Subject: Video: how to commit a patch to hadoop

For new committers and for the curious, I've just stuck up a screen capture with commentary on how to commit a patch to the Hadoop SVN repository:
http://youtu.be/txW3m7qWdzw

It's 27 minutes long, not just because of the commentary but because you have to be as rigorous committing a one-line patch as you do committing a whole new module. I'd love to see how the git+gerrit projects (like Accumulo) work, to see if we could speed up both the review and the check-in.

-steve

--
Best Regards, Haosdent Huang