[jira] [Resolved] (HADOOP-10034) optimize same-filesystem symlinks by doing resolution server-side

2013-10-18 Thread Eli Collins (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-10034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eli Collins resolved HADOOP-10034.
--

Resolution: Duplicate

Dupe of HDFS-932

 optimize same-filesystem symlinks by doing resolution server-side
 -

 Key: HADOOP-10034
 URL: https://issues.apache.org/jira/browse/HADOOP-10034
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Reporter: Colin Patrick McCabe

 We should optimize same-filesystem symlinks by doing resolution server-side 
 rather than client-side, as discussed on HADOOP-9780.
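As a rough sketch of why this matters (illustrative Java only; these names are not Hadoop's real API), client-side resolution pays one RPC round trip per symlink hop, which a server-side resolver could collapse into a single call:

```java
// Hypothetical sketch: with client-side symlink resolution, each link hop
// costs one extra RPC round trip; server-side resolution answers the same
// lookup in one.
import java.util.HashMap;
import java.util.Map;

public class SymlinkResolutionSketch {
    // Toy namespace: path -> link target; absence means a regular file.
    static Map<String, String> links = new HashMap<>();

    // Counts the simulated RPCs needed to resolve a path client-side.
    static int resolveClientSide(String path) {
        int rpcs = 0;
        while (true) {
            rpcs++;                      // one getFileInfo-style RPC per hop
            String target = links.get(path);
            if (target == null) {
                break;                   // reached a non-link; done
            }
            path = target;               // client retries with the new path
        }
        return rpcs;
    }

    public static void main(String[] args) {
        links.put("/a", "/b");
        links.put("/b", "/c");           // /c is a regular file
        System.out.println(resolveClientSide("/a"));  // 3 round trips
    }
}
```

With server-side resolution the NameNode would walk the chain itself and return the final status in one response.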



--
This message was sent by Atlassian JIRA
(v6.1#6144)


moving up the release versions in JIRA

2013-10-18 Thread Steve Loughran
I've just moved version 2.2.0 to the released category for HADOOP, COMMON,
and MAPREDUCE; ~6 JIRAs marked as fixed for 2.2.0 went into 2.2.1.

I can't do it for YARN; someone with admin rights there will have to do it.
It'd be good to have it consistent before the issues start arriving.

Remember: bug reports are a metric of popularity :)

-steve

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.


Build failed in Jenkins: Hadoop-Common-trunk #925

2013-10-18 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/925/changes

Changes:

[cnauroth] HDFS-5365. Fix libhdfs compile error on FreeBSD9. Contributed by 
Radim Kolar.

[jeagles] HDFS-4511. Cover package org.apache.hadoop.hdfs.tools with unit test 
(Andrey Klochkov via jeagles)

[suresh] HDFS-5374. Remove deadcode in DFSOutputStream. Contributed by Suresh 
Srinivas.

[cnauroth] HADOOP-10055. FileSystemShell.apt.vm doc has typo numRepicas. 
Contributed by Akira Ajisaka.

[cnauroth] HDFS-5336. DataNode should not output StartupProgress metrics. 
Contributed by Akira Ajisaka.

[jing9] HDFS-5379. Update links to datanode information in dfshealth.html. 
Contributed by Haohui Mai.

[cnauroth] HDFS-5375. hdfs.cmd does not expose several snapshot commands. 
Contributed by Chris Nauroth.

--
[...truncated 56861 lines...]
Adding reference: maven.local.repository
[DEBUG] Initialize Maven Ant Tasks
parsing buildfile 
jar:file:/home/jenkins/.m2/repository/org/apache/maven/plugins/maven-antrun-plugin/1.6/maven-antrun-plugin-1.6.jar!/org/apache/maven/ant/tasks/antlib.xml
 with URI = 
jar:file:/home/jenkins/.m2/repository/org/apache/maven/plugins/maven-antrun-plugin/1.6/maven-antrun-plugin-1.6.jar!/org/apache/maven/ant/tasks/antlib.xml
 from a zip file
parsing buildfile 
jar:file:/home/jenkins/.m2/repository/org/apache/ant/ant/1.8.1/ant-1.8.1.jar!/org/apache/tools/ant/antlib.xml
 with URI = 
jar:file:/home/jenkins/.m2/repository/org/apache/ant/ant/1.8.1/ant-1.8.1.jar!/org/apache/tools/ant/antlib.xml
 from a zip file
Class org.apache.maven.ant.tasks.AttachArtifactTask loaded from parent loader 
(parentFirst)
 +Datatype attachartifact org.apache.maven.ant.tasks.AttachArtifactTask
Class org.apache.maven.ant.tasks.DependencyFilesetsTask loaded from parent 
loader (parentFirst)
 +Datatype dependencyfilesets org.apache.maven.ant.tasks.DependencyFilesetsTask
Setting project property: test.build.dir - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/test-dir
Setting project property: test.exclude.pattern - _
Setting project property: hadoop.assemblies.version - 3.0.0-SNAPSHOT
Setting project property: test.exclude - _
Setting project property: distMgmtSnapshotsId - apache.snapshots.https
Setting project property: project.build.sourceEncoding - UTF-8
Setting project property: java.security.egd - file:///dev/urandom
Setting project property: distMgmtSnapshotsUrl - 
https://repository.apache.org/content/repositories/snapshots
Setting project property: distMgmtStagingUrl - 
https://repository.apache.org/service/local/staging/deploy/maven2
Setting project property: avro.version - 1.7.4
Setting project property: test.build.data - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/test-dir
Setting project property: commons-daemon.version - 1.0.13
Setting project property: hadoop.common.build.dir - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/../../hadoop-common-project/hadoop-common/target
Setting project property: testsThreadCount - 4
Setting project property: maven.test.redirectTestOutputToFile - true
Setting project property: jdiff.version - 1.0.9
Setting project property: distMgmtStagingName - Apache Release Distribution 
Repository
Setting project property: project.reporting.outputEncoding - UTF-8
Setting project property: build.platform - Linux-i386-32
Setting project property: protobuf.version - 2.5.0
Setting project property: failIfNoTests - false
Setting project property: protoc.path - ${env.HADOOP_PROTOC_PATH}
Setting project property: jersey.version - 1.9
Setting project property: distMgmtStagingId - apache.staging.https
Setting project property: distMgmtSnapshotsName - Apache Development Snapshot 
Repository
Setting project property: ant.file - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/pom.xml
[DEBUG] Setting properties with prefix: 
Setting project property: project.groupId - org.apache.hadoop
Setting project property: project.artifactId - hadoop-common-project
Setting project property: project.name - Apache Hadoop Common Project
Setting project property: project.description - Apache Hadoop Common Project
Setting project property: project.version - 3.0.0-SNAPSHOT
Setting project property: project.packaging - pom
Setting project property: project.build.directory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target
Setting project property: project.build.outputDirectory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/classes
Setting project property: project.build.testOutputDirectory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/test-classes
Setting project property: project.build.sourceDirectory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/src/main/java
Setting project 

[jira] [Created] (HADOOP-10059) RPC authentication and authorization metrics overflow to negative values on busy clusters

2013-10-18 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-10059:
---

 Summary: RPC authentication and authorization metrics overflow to 
negative values on busy clusters
 Key: HADOOP-10059
 URL: https://issues.apache.org/jira/browse/HADOOP-10059
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.2.0, 0.23.9
Reporter: Jason Lowe
Priority: Minor


The RPC metrics for authorization and authentication successes can easily 
overflow to negative values on a busy cluster that has been up for a long time. 
 We should consider providing 64-bit values for these counters.
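For illustration (self-contained Java, not Hadoop's metrics code), the wrap-around looks like this:

```java
// A 32-bit counter wraps to a large negative value once it passes
// Integer.MAX_VALUE; a 64-bit (long) counter keeps counting correctly.
public class CounterOverflowSketch {
    public static void main(String[] args) {
        int narrow = Integer.MAX_VALUE;  // 2,147,483,647 successes so far
        long wide = Integer.MAX_VALUE;

        narrow++;                        // one more successful RPC auth
        wide++;

        System.out.println(narrow);      // prints -2147483648 (wrapped)
        System.out.println(wide);        // prints 2147483648 (still correct)
    }
}
```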



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Re: [VOTE] Merge HDFS-4949 to trunk

2013-10-18 Thread Chris Nauroth
I agree that the code has reached a stable point.  Colin and Andrew, thank
you for your contributions and collaboration.

Throughout development, I've watched the feature grow by running daily
builds in a pseudo-distributed deployment.  As of this week, the full
feature set is working end-to-end.  I also think we've reached a point of
API stability for clients who want to control caching programmatically.

There are several things that I'd like to see completed before the merge as
pre-requisites:

- HDFS-5203: Concurrent clients that add a cache directive on the same path
may prematurely uncache from each other.
- HDFS-5385: Caching RPCs are AtMostOnce, but do not persist client ID and
call ID to edit log.
- HDFS-5386: Add feature documentation for datanode caching.
- Standard clean-ups to satisfy Jenkins pre-commit on the merge patch.
 (For example, I know we've introduced some Javadoc warnings.)
- Full test suite run on Windows.  (The feature is not yet implemented on
Windows.  This is just intended to catch regressions.)
- Test plan posted to HDFS-4949, similar in scope to the snapshot test plan
that was posted to HDFS-2802.  For my own part, I've run the new unit
tests, and I've tested end-to-end in a pseudo-distributed deployment.  It's
unlikely that I'll get a chance to test fully distributed before the vote
closes, so I'm curious to hear if you've done this on your side yet.

Also, I want to confirm that this vote only covers trunk.  I don't see
branch-2 mentioned, so I assume that we're not voting on merge to branch-2
yet.

Before I cast my vote, can you please discuss whether or not it's feasible
to complete all of the above in the next 7 days?  For the issues assigned
to me, I do expect to complete them.

Thanks again for all of your hard work!

Chris Nauroth
Hortonworks
http://hortonworks.com/



On Thu, Oct 17, 2013 at 3:07 PM, Colin McCabe cmcc...@alumni.cmu.edu wrote:

 +1.  Thanks, guys.

 best,
 Colin

 On Thu, Oct 17, 2013 at 3:01 PM, Andrew Wang andrew.w...@cloudera.com
 wrote:
  Hello all,
 
  I'd like to call a vote to merge the HDFS-4949 branch (in-memory caching)
  to trunk. Colin McCabe and I have been hard at work the last 3.5 months
  implementing this feature, and feel that it's reached a level of
 stability
  and utility where it's ready for broader testing and integration.
 
  I'd also like to thank Chris Nauroth at Hortonworks for code reviews and
  bug fixes, and everyone who's reviewed the HDFS-4949 design doc and left
  comments.
 
  Obviously, I am +1 for the merge. The vote will run the standard 7 days,
  closing on October 24 at 11:59PM.
 
  Thanks,
  Andrew




Re: [VOTE] Merge HDFS-4949 to trunk

2013-10-18 Thread Colin McCabe
Hi Chris,

I think it's feasible to complete those tasks in the next 7 days.
Andrew is on HDFS-5386.

The test plan document is a great idea.  We'll try to get that up
early next week.  We have a lot of unit tests now, clearly, but some
manual testing is important too.

If we discover any issues during testing, then we can push out the
merge timeframe.  For example, one area that probably needs more
testing is caching+federation.

I would like to get HDFS-5378 and HDFS-5366 in as well.

The other subtasks are nice to have but not really critical, and I
think it would be just as easy to do them in trunk.  We're hoping that
having this in trunk will make it easier for us to collaborate on
HDFS-2832 and other ongoing work.

 Also, I want to confirm that this vote only covers trunk.
 I don't see branch-2 mentioned, so I assume that we're
 not voting on merge to branch-2 yet.

Yeah, this vote is only to merge to trunk.

cheers.
Colin




[jira] [Reopened] (HADOOP-9652) RawLocalFs#getFileLinkStatus does not fill in the link owner and mode

2013-10-18 Thread Colin Patrick McCabe (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-9652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe reopened HADOOP-9652:
--

  Assignee: Colin Patrick McCabe  (was: Andrew Wang)

 RawLocalFs#getFileLinkStatus does not fill in the link owner and mode
 -

 Key: HADOOP-9652
 URL: https://issues.apache.org/jira/browse/HADOOP-9652
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.3.0

 Attachments: 0001-temporarily-disable-HADOOP-9652.patch, 
 hadoop-9452-1.patch, hadoop-9652-2.patch, hadoop-9652-3.patch, 
 hadoop-9652-4.patch, hadoop-9652-5.patch, hadoop-9652-6.patch


 {{RawLocalFs#getFileLinkStatus}} does not actually get the owner and mode of 
 the symlink, but instead uses the owner and mode of the symlink target.  If 
 the target can't be found, it fills in bogus values (the empty string and 
 FsPermission.getDefault) for these.
 Symlinks have an owner distinct from the owner of the target they point to, 
 and getFileLinkStatus ought to expose this.
 On some operating systems, symlinks can have a permission other than 0777.  
 We ought to expose this in RawLocalFileSystem and other places, although we 
 don't necessarily have to support this behavior in HDFS.
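For reference, a minimal java.nio.file sketch (illustrative only, not the HADOOP-9652 patch) of the lstat-style call that reads a link's own attributes rather than its target's:

```java
// Illustrative only: java.nio.file can read a symlink's own owner and
// permissions by passing LinkOption.NOFOLLOW_LINKS (an lstat), instead of
// following the link (a stat), which is what the buggy getFileLinkStatus
// effectively did. POSIX platforms only.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFileAttributes;
import java.nio.file.attribute.PosixFilePermissions;

public class LinkStatusSketch {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("linkdemo");
        Path target = Files.createFile(dir.resolve("target"));
        Path link = Files.createSymbolicLink(dir.resolve("link"), target);

        // lstat: attributes of the link itself, not what it points to.
        PosixFileAttributes ofLink = Files.readAttributes(
                link, PosixFileAttributes.class, LinkOption.NOFOLLOW_LINKS);
        // stat: follows the link to the target.
        PosixFileAttributes ofTarget = Files.readAttributes(
                link, PosixFileAttributes.class);

        System.out.println("link owner:  " + ofLink.owner().getName());
        System.out.println("link mode:   "
                + PosixFilePermissions.toString(ofLink.permissions()));
        System.out.println("target mode: "
                + PosixFilePermissions.toString(ofTarget.permissions()));
    }
}
```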



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Re: [VOTE] Merge HDFS-4949 to trunk

2013-10-18 Thread Chris Nauroth
+1

Sounds great!

Regarding testing caching+federation, this is another thing that I had
intended to pick up as part of HDFS-5149.  I'm not sure if I can get this
done in the next 7 days, so I'll keep you posted.

Chris Nauroth
Hortonworks
http://hortonworks.com/






Managing docs with hadoop-1 hadoop-2

2013-10-18 Thread Arun C Murthy
Folks,

 Currently http://hadoop.apache.org/docs/stable/ points to hadoop-1. With 
hadoop-2 going GA, should we just point that to hadoop-2? 

 A couple of options:
 # Have stable1/stable2 links:
   http://hadoop.apache.org/docs/stable1 -> hadoop-1.x
   http://hadoop.apache.org/docs/stable2 -> hadoop-2.x

 # Just point stable to hadoop-2 and create something new for hadoop-1:
   http://hadoop.apache.org/docs/hadoop1 -> hadoop-1.x
   http://hadoop.apache.org/docs/stable -> hadoop-2.x

We have a similar requirement for the *current* link too.

Thoughts?

thanks,
Arun

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/





Re: moving up the release versions in JIRA

2013-10-18 Thread Sandy Ryza
Thanks Steve. Just did the same for YARN.  7 JIRAs with 2.2.0 target
versions moved to 2.2.1.

-Sandy





Re: [VOTE] Release Apache Hadoop 2.2.0

2013-10-18 Thread Arun C Murthy
Sorry, I've been away sick and hence the silence.

I just started a discussion on the *-dev@ lists on Managing docs... once we 
all agree, I'll fix the links.

Makes sense?

thanks,
Arun

On Oct 17, 2013, at 11:56 AM, Akira AJISAKA ajisa...@oss.nttdata.co.jp wrote:

 Hello,
 
 The current document (http://hadoop.apache.org/docs/current/) is still 
 2.1.0-beta.
 Would you tell me how to update?
 
  common-dev@
 I sent to wrong address 'hadoop-general@'.
 I'm sorry to send the same mail again.
 
 thanks,
 Akira
 
 (2013/10/17 11:44), Akira AJISAKA wrote:
 Hello,
 
 The current document (http://hadoop.apache.org/docs/current/) is still
 2.1.0-beta.
 Would you tell me how to update?
 
 thanks,
 Akira
 
 (2013/10/16 13:34), Akira AJISAKA wrote:
 Congrats!
 
 The current document (http://hadoop.apache.org/docs/current/) is now
 hadoop-2.1.0-beta. I want someone to update.
 
 thanks,
 Akira
 
 (2013/10/15 21:35), Arun C Murthy wrote:
 With 31 +1s (15 binding) and no -1s the vote passes.
 
 Congratulations to all, Hadoop 2 is now GA!
 
 thanks,
 Arun
 
 On Oct 7, 2013, at 12:00 AM, Arun C Murthy a...@hortonworks.com wrote:
 
 Folks,
 
 I've created a release candidate (rc0) for hadoop-2.2.0 that I would
 like to get released - this release fixes a small number of bugs and
 some protocol/api issues which should ensure they are now stable and
 will not change in hadoop-2.x.
 
 The RC is available at:
 http://people.apache.org/~acmurthy/hadoop-2.2.0-rc0
 The RC tag in svn is here:
 http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.2.0-rc0
 
 The maven artifacts are available via repository.apache.org.
 
 Please try the release and vote; the vote will run for the usual 7
 days.
 
 thanks,
 Arun
 
 P.S.: Thanks to Colin, Andrew, Daryn, Chris and others for helping
 nail down the symlinks-related issues. I'll release note the fact
 that we have disabled it in 2.2. Also, thanks to Vinod for some
 heavy-lifting on the YARN side in the last couple of weeks.
 
 
 
 
 
 --
 Arun C. Murthy
 Hortonworks Inc.
 http://hortonworks.com/
 
 
 

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/





Re: Managing docs with hadoop-1 hadoop-2

2013-10-18 Thread Eli Collins
On Fri, Oct 18, 2013 at 2:10 PM, Arun C Murthy a...@hortonworks.com wrote:

 Folks,

  Currently http://hadoop.apache.org/docs/stable/ points to hadoop-1. With
 hadoop-2 going GA, should we just point that to hadoop-2?

  Couple of options:
  # Have stable1/stable2 links:
    http://hadoop.apache.org/docs/stable1 -> hadoop-1.x
    http://hadoop.apache.org/docs/stable2 -> hadoop-2.x


+1, and I would also make:
 current -> stable2 (since v2 is the latest)
 stable -> stable1 (for compatibility)

Thanks,
Eli


 # Just point stable to hadoop-2 and create something new for hadoop-1:
    http://hadoop.apache.org/docs/hadoop1 -> hadoop-1.x
    http://hadoop.apache.org/docs/stable -> hadoop-2.x

 We have similar requirements for *current* link too.

 Thoughts?

 thanks,
 Arun

 --
 Arun C. Murthy
 Hortonworks Inc.
 http://hortonworks.com/






Re: Managing docs with hadoop-1 hadoop-2

2013-10-18 Thread Roman Shaposhnik
On Fri, Oct 18, 2013 at 2:10 PM, Arun C Murthy a...@hortonworks.com wrote:
 Folks,

  Currently http://hadoop.apache.org/docs/stable/ points to hadoop-1. With 
 hadoop-2 going GA, should we just point that to hadoop-2?

  Couple of options:
  # Have stable1/stable2 links:
    http://hadoop.apache.org/docs/stable1 -> hadoop-1.x
    http://hadoop.apache.org/docs/stable2 -> hadoop-2.x

+1 to this option.

Thanks,
Roman.


Re: Managing docs with hadoop-1 hadoop-2

2013-10-18 Thread Tsuyoshi OZAWA
  # Have stable1/stable2 links:
   http://hadoop.apache.org/docs/stable1 -> hadoop-1.x
   http://hadoop.apache.org/docs/stable2 -> hadoop-2.x
 +1, would also make:
 current -> stable2 (since v2 is the latest)
 stable -> stable1 (for compatibility)

+1, Eli's proposal looks good to me.

- Tsuyoshi


Re: [VOTE] Release Apache Hadoop 2.2.0

2013-10-18 Thread Akira AJISAKA

Thanks!
