[jira] [Created] (HADOOP-9644) Hadoop 1.2 RPM build missing task-log4j.properties

2013-06-14 Thread Blake Williams (JIRA)
Blake Williams created HADOOP-9644:
--

 Summary: Hadoop 1.2 RPM build missing task-log4j.properties
 Key: HADOOP-9644
 URL: https://issues.apache.org/jira/browse/HADOOP-9644
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
 Environment: CentOS 5.9 64-bit, updated version of libtool
Reporter: Blake Williams
Priority: Trivial


When running ant rpm, I receive the following error:

{noformat} 
  [rpm] + '[' /tmp/hadoop_package_build_root/BUILD/etc/hadoop '!=' 
/tmp/hadoop_package_build_root/BUILD//usr/conf ']'
  [rpm] + rm -rf /tmp/hadoop_package_build_root/BUILD//usr/etc
  [rpm] + /usr/lib/rpm/redhat/brp-compress
  [rpm] + /usr/lib/rpm/redhat/brp-strip /usr/bin/strip
  [rpm] + /usr/lib/rpm/redhat/brp-strip-static-archive /usr/bin/strip
  [rpm] + /usr/lib/rpm/redhat/brp-strip-comment-note /usr/bin/strip 
/usr/bin/objdump
  [rpm] + /usr/lib/rpm/brp-python-bytecompile
  [rpm] Processing files: hadoop-1.2.0-1
  [rpm] warning: File listed twice: /usr/libexec
  [rpm] warning: File listed twice: /usr/libexec/hadoop-config.sh
  [rpm] warning: File listed twice: /usr/libexec/jsvc.amd64
  [rpm] Checking for unpackaged file(s): /usr/lib/rpm/check-files 
/tmp/hadoop_package_build_root/BUILD
  [rpm] error: Installed (but unpackaged) file(s) found:
  [rpm] /etc/hadoop/task-log4j.properties
  [rpm] File listed twice: /usr/libexec
  [rpm] File listed twice: /usr/libexec/hadoop-config.sh
  [rpm] File listed twice: /usr/libexec/jsvc.amd64
  [rpm] Installed (but unpackaged) file(s) found:
  [rpm] /etc/hadoop/task-log4j.properties
  [rpm] 
  [rpm] 
  [rpm] RPM build errors:

BUILD FAILED
/root/hadoop-1.2.0/build.xml:1887: '/usr/bin/rpmbuild' failed with exit code 1
{noformat}

The following patch fixes the issue:
{noformat}
*** /dev/null 2013-06-14 15:27:11.0 +1000
--- src/packages/rpm/spec/hadoop.spec   2013-06-14 15:31:46.0 +1000
***
*** 194,199 
--- 194,200 
  %config(noreplace) %{_conf_dir}/ssl-server.xml.example
  %config(noreplace) %{_conf_dir}/taskcontroller.cfg
  %config(noreplace) %{_conf_dir}/fair-scheduler.xml
+ %config(noreplace) %{_conf_dir}/task-log4j.properties
  %{_prefix}
  %attr(0755,root,root) %{_prefix}/libexec
  %attr(0755,root,root) /etc/rc.d/init.d
{noformat}


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Build failed in Jenkins: Hadoop-Common-trunk #799

2013-06-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/799/changes

Changes:

[acmurthy] MAPREDUCE-5319. Set user.name in job.xml. Contributed by Xuan Gong.

[acmurthy] YARN-812. Set default logger for application summary logger to 
hadoop.root.logger. Contributed by Siddharth Seth.

[vinodkv] YARN-792. Moved NodeHealthStatus from yarn.api.record to 
yarn.server.api.record. Contributed by Jian He.

[szetszwo] HDFS-4845. FSNamesystem.deleteInternal should acquire write-lock 
before changing the inode map.  Contributed by Arpit Agarwal

[vinodkv] YARN-692. Creating NMToken master key on RM and sharing it with NM as 
a part of RM-NM heartbeat. Contributed by Omkar Vinit Joshi.

[vinodkv] YARN-773. Moved YarnRuntimeException from package api.yarn to 
api.yarn.exceptions. Contributed by Jian He.

[jlowe] MAPREDUCE-4019. -list-attempt-ids is not working. Contributed by Ashwin 
Shankar, Devaraj K, and B Anil Kumar

[vinodkv] MAPREDUCE-5199. Removing ApplicationTokens file as it is no longer 
needed. Contributed by Daryn Sharp.

[jing9] HDFS-4902. DFSClient#getSnapshotDiffReport should use string path 
rather than o.a.h.fs.Path. Contributed by Binglin Chang.

[vinodkv] YARN-746. Renamed Service.register() and Service.unregister() to 
registerServiceListener() and unregisterServiceListener() respectively. 
Contributed by Steve Loughran.

[vinodkv] YARN-530. Defined Service model strictly, implemented AbstractService 
for robust subclassing and migrated yarn-common services. Contributed by Steve 
Loughran.
YARN-117. Migrated rest of YARN to the new service model. Contributed by Steve 
Loughran.
MAPREDUCE-5298. Moved MapReduce services to YARN-530 stricter lifecycle. 
Contributed by Steve Loughran.

--
[...truncated 50160 lines...]
Adding reference: maven.compile.classpath
Adding reference: maven.runtime.classpath
Adding reference: maven.test.classpath
Adding reference: maven.plugin.classpath
Adding reference: maven.project
Adding reference: maven.project.helper
Adding reference: maven.local.repository
[DEBUG] Initialize Maven Ant Tasks
parsing buildfile 
jar:file:/home/jenkins/.m2/repository/org/apache/maven/plugins/maven-antrun-plugin/1.6/maven-antrun-plugin-1.6.jar!/org/apache/maven/ant/tasks/antlib.xml
 with URI = 
jar:file:/home/jenkins/.m2/repository/org/apache/maven/plugins/maven-antrun-plugin/1.6/maven-antrun-plugin-1.6.jar!/org/apache/maven/ant/tasks/antlib.xml
 from a zip file
parsing buildfile 
jar:file:/home/jenkins/.m2/repository/org/apache/ant/ant/1.8.1/ant-1.8.1.jar!/org/apache/tools/ant/antlib.xml
 with URI = 
jar:file:/home/jenkins/.m2/repository/org/apache/ant/ant/1.8.1/ant-1.8.1.jar!/org/apache/tools/ant/antlib.xml
 from a zip file
Class org.apache.maven.ant.tasks.AttachArtifactTask loaded from parent loader 
(parentFirst)
 +Datatype attachartifact org.apache.maven.ant.tasks.AttachArtifactTask
Class org.apache.maven.ant.tasks.DependencyFilesetsTask loaded from parent 
loader (parentFirst)
 +Datatype dependencyfilesets org.apache.maven.ant.tasks.DependencyFilesetsTask
Setting project property: test.build.dir - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/test-dir
Setting project property: test.exclude.pattern - _
Setting project property: hadoop.assemblies.version - 3.0.0-SNAPSHOT
Setting project property: test.exclude - _
Setting project property: distMgmtSnapshotsId - apache.snapshots.https
Setting project property: project.build.sourceEncoding - UTF-8
Setting project property: distMgmtSnapshotsUrl - 
https://repository.apache.org/content/repositories/snapshots
Setting project property: distMgmtStagingUrl - 
https://repository.apache.org/service/local/staging/deploy/maven2
Setting project property: test.build.data - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/test-dir
Setting project property: commons-daemon.version - 1.0.13
Setting project property: hadoop.common.build.dir - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/../../hadoop-common-project/hadoop-common/target
Setting project property: testsThreadCount - 4
Setting project property: maven.test.redirectTestOutputToFile - true
Setting project property: jdiff.version - 1.0.9
Setting project property: distMgmtStagingName - Apache Release Distribution 
Repository
Setting project property: project.reporting.outputEncoding - UTF-8
Setting project property: build.platform - Linux-i386-32
Setting project property: failIfNoTests - false
Setting project property: distMgmtStagingId - apache.staging.https
Setting project property: distMgmtSnapshotsName - Apache Development Snapshot 
Repository
Setting project property: ant.file - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/pom.xml
[DEBUG] Setting properties with prefix: 
Setting project property: project.groupId - org.apache.hadoop
Setting project property: project.artifactId - hadoop-common-project
Setting 

[jira] [Created] (HADOOP-9645) KerberosAuthenticator NPEs on connect error

2013-06-14 Thread Daryn Sharp (JIRA)
Daryn Sharp created HADOOP-9645:
---

 Summary: KerberosAuthenticator NPEs on connect error
 Key: HADOOP-9645
 URL: https://issues.apache.org/jira/browse/HADOOP-9645
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.5-alpha
Reporter: Daryn Sharp
Priority: Critical


An NPE occurs if there's a kerberos error during the initial connect.  In this 
case, the NN was using an HTTP service principal with a stale kvno.  It causes 
webhdfs to fail in a non-user-friendly manner by masking the real error from 
the user.

{noformat}
java.lang.RuntimeException: java.lang.NullPointerException
at
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1241)
at
sun.net.www.protocol.http.HttpURLConnection.getHeaderField(HttpURLConnection.java:2713)
at
java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:477)
at
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.isNegotiate(KerberosAuthenticator.java:164)
at
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:140)
at
org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
at
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.openHttpUrlConnection(WebHdfsFileSystem.java:364)
{noformat}
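The NPE surfaces inside HttpURLConnection while the negotiate check reads the response headers, so the real Kerberos failure (the stale kvno) is masked. As a rough illustration of the defensive fix, here is a minimal null-safe sketch of such a check; the class name and the extracted `isNegotiateHeader` helper are invented for illustration, not the actual KerberosAuthenticator code:

```java
import java.io.IOException;
import java.net.HttpURLConnection;

// Hypothetical null-safe variant of a negotiate check: treat a missing
// WWW-Authenticate header as "not negotiating" instead of letting an NPE
// mask the underlying Kerberos error.
public class NegotiateCheck {
    static final String WWW_AUTHENTICATE = "WWW-Authenticate";
    static final String NEGOTIATE = "Negotiate";

    // Pure helper so the header-parsing logic is testable in isolation.
    static boolean isNegotiateHeader(String authHeader) {
        return authHeader != null
            && authHeader.trim().startsWith(NEGOTIATE);
    }

    static boolean isNegotiate(HttpURLConnection conn) throws IOException {
        if (conn.getResponseCode() != HttpURLConnection.HTTP_UNAUTHORIZED) {
            return false;
        }
        // getHeaderField may return null; the helper tolerates that.
        return isNegotiateHeader(conn.getHeaderField(WWW_AUTHENTICATE));
    }
}
```

The point of the sketch is only that the header lookup must tolerate null before any string methods are called on it.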

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: [DISCUSS] Ensuring Consistent Behavior for Alternative Hadoop FileSystems + Workshop

2013-06-14 Thread Stephen Watt
This is a good point Andrew. The hangout was actually the first time I'd heard 
about the AbstractFileSystem class. I've been doing some further analysis on 
the source in Hadoop 2.0 and when I look at the Hadoop 2.0 implementation of 
DistributedFileSystem and LocalFileSystem class they extend the FileSystem 
class and not AbstractFileSystem. I would imagine if the plan for Hadoop 2.0 is 
to build FileSystem implementations using the AbstractFileSystem, then those 
two would use it, so I'm a bit confused.

Perhaps I'm looking in the wrong place? Sanjay (or anyone else), could you 
clarify this for us?

Regards
Steve Watt

- Original Message -
From: Andrew Wang andrew.w...@cloudera.com
To: common-dev@hadoop.apache.org
Cc: mbhandar...@gopivotal.com, shv hadoop shv.had...@gmail.com, 
ste...@hortonworks.com, erlv5...@gmail.com, shaposh...@gmail.com, 
apurt...@apache.org, cdoug...@apache.org, jayh...@cs.ucsc.edu, 
san...@hortonworks.com
Sent: Monday, June 10, 2013 5:14:16 PM
Subject: Re: [DISCUSS] Ensuring Consistent Behavior for Alternative Hadoop 
FileSystems + Workshop

Thanks for the summary Steve, very useful.

I'm wondering a bit about the point on testing AbstractFileSystem rather
than FileSystem. While these are both wrappers for DFSClient, they're
pretty different in terms of the APIs they expose. Furthermore, AFS is not
actually a client-facing API; clients interact with an AFS through
FileContext.

I ask because I did some work trying to unify the symlink tests for both
FileContext and FileSystem (HADOOP-9370 and HADOOP-9355). Subtle things
like the default mkdir semantics are different; you can see some of the
contortions in HADOOP-9370. I ultimately ended up just adhering to the
FileContext-style behavior, but as a result I'm not really testing some
parts of FileSystem.

Are we going to end up with two different sets of validation tests? Or just
choose one API over the other? FileSystem is supposed to eventually be
deprecated in favor of FileContext (HADOOP-6446, filed in 2009), but actual
uptake in practice has been slow.

Best,
Andrew


On Mon, Jun 10, 2013 at 1:49 PM, Stephen Watt sw...@redhat.com wrote:

 For those interested - I posted a recap of this mornings Google Hangout on
 the Wiki Page at https://wiki.apache.org/hadoop/HCFS/Progress

 On Jun 5, 2013, at 8:14 PM, Stephen Watt wrote:

  Hi Folks
 
  Per Roman's recommendation I've created a Wiki Page for organizing the
 work and managing the logistics -
 https://wiki.apache.org/hadoop/HCFS/Progress
 
  I'd like to propose a Google Hangout at 9am PST on Monday June 10th to
 get together and discuss the initiative. Please respond back to me if
 you're interested or would like to propose a different time. I'll update
 our Wiki page with the logistics.
 
  Regards
  Steve Watt
 
  - Original Message -
  From: Roman Shaposhnik shaposh...@gmail.com
  To: Stephen Watt sw...@redhat.com
  Cc: common-dev@hadoop.apache.org, mbhandar...@gopivotal.com, shv
 hadoop shv.had...@gmail.com, ste...@hortonworks.com, erlv5...@gmail.com,
 apurt...@apache.org
  Sent: Friday, May 31, 2013 5:28:58 PM
  Subject: Re: [DISCUSS] Ensuring Consistent Behavior for Alternative
 Hadoop FileSystems + Workshop
 
  On Fri, May 31, 2013 at 1:00 PM, Stephen Watt sw...@redhat.com wrote:
  What is the protocol for organizing the logistics and collaborating? I
 am loath to flood common-dev with "does this time work for you?" emails
 from the interested parties. Do we create a high level JIRA ticket and
 collaborate and post comments and G+ meetup times on that ? Another option
 might be the Wiki, I'd be happy to be responsible with tracking progress on
 https://wiki.apache.org/hadoop/HCFS/Progress until we are able to break
 initiatives down into more granular JIRA tickets.
 
  I'd go with a wiki page and perhaps http://www.doodle.com/
 
  After we've had a few G+ hangouts, for those that would like to meet
 face to face, I have also made an all day reservation for a meeting room
 that can hold up to 20 people at our Red Hat Office in Castro Street,
 Mountain View on Tuesday June 25th (the day before Hadoop Summit and a
 short drive away). We don't have to use the whole day, but it gives us some
 flexibility around the availability of interested parties. I was thinking
 something along the lines of 10am - 3pm. We are happy to cater lunch.
 
  That also would be very much appreciated!
 
  Thanks,
  Roman.



[jira] [Created] (HADOOP-9646) Inconsistent exception specifications in FileUtils#chmod

2013-06-14 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-9646:


 Summary: Inconsistent exception specifications in FileUtils#chmod
 Key: HADOOP-9646
 URL: https://issues.apache.org/jira/browse/HADOOP-9646
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


There are two FileUtils#chmod methods:

{code}
public static int chmod(String filename, String perm
  ) throws IOException, InterruptedException;
public static int chmod(String filename, String perm, boolean recursive)
throws IOException;
{code}

The first one just calls the second one with {{recursive = false}}, but despite 
that it is declared as throwing {{InterruptedException}}, something the second 
one doesn't declare.

The new Java7 chmod API, which we will transition to once JDK6 support is 
dropped, does *not* throw {{InterruptedException}}.

See 
[http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#setOwner(java.nio.file.Path,
 java.nio.file.attribute.UserPrincipal)]

So we should make these consistent by removing the {{InterruptedException}}.
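For reference, a minimal sketch of what a chmod built on the Java 7 NIO API might look like (the class name and the use of symbolic permission strings are assumptions for illustration). Note that the throws clause carries only {{IOException}}, with no {{InterruptedException}} anywhere:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Illustrative non-recursive chmod on the Java 7 NIO API: the only checked
// exception in play is IOException.
public class NioChmod {
    public static void chmod(Path file, String perm) throws IOException {
        // "rw-r--r--" style symbolic permissions, parsed by the JDK.
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString(perm);
        Files.setPosixFilePermissions(file, perms);
    }
}
```

(POSIX-only: {{Files#setPosixFilePermissions}} throws UnsupportedOperationException on filesystems without POSIX attribute support.)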

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: [DISCUSS] Ensuring Consistent Behavior for Alternative Hadoop FileSystems + Workshop

2013-06-14 Thread Andrew Wang
Hey Steve,

I agree that it's confusing. FileSystem and FileContext are essentially two
parallel sets of interfaces for accessing filesystems in Hadoop.
FileContext splits the interface and shared code with AbstractFileSystem,
while FileSystem is all-in-one. If you're looking for the AFS equivalents
to DistributedFileSystem and LocalFileSystem, see Hdfs and LocalFs.

Realistically, FileSystem isn't going to be deprecated and removed any time
soon. There are lots of 3rd-party FileSystem implementations, and most apps
today use FileSystem (including many HDFS internals, like trash and the
shell).

When I read the wiki page, I figured that the mention of AFS was
essentially a typo, since everyone's been steaming ahead with FileSystem.
Standardizing FileSystem makes total sense to me, I just wanted to confirm
that plan.

Best,
Andrew


On Fri, Jun 14, 2013 at 9:38 AM, Stephen Watt sw...@redhat.com wrote:

 This is a good point Andrew. The hangout was actually the first time I'd
 heard about the AbstractFileSystem class. I've been doing some further
 analysis on the source in Hadoop 2.0 and when I look at the Hadoop 2.0
 implementation of DistributedFileSystem and LocalFileSystem class they
 extend the FileSystem class and not AbstractFileSystem. I would imagine if
 the plan for Hadoop 2.0 is to build FileSystem implementations using the
 AbstractFileSystem, then those two would use it, so I'm a bit confused.

 Perhaps I'm looking in the wrong place? Sanjay (or anyone else), could you
 clarify this for us?

 Regards
 Steve Watt

 - Original Message -
 From: Andrew Wang andrew.w...@cloudera.com
 To: common-dev@hadoop.apache.org
 Cc: mbhandar...@gopivotal.com, shv hadoop shv.had...@gmail.com,
 ste...@hortonworks.com, erlv5...@gmail.com, shaposh...@gmail.com,
 apurt...@apache.org, cdoug...@apache.org, jayh...@cs.ucsc.edu,
 san...@hortonworks.com
 Sent: Monday, June 10, 2013 5:14:16 PM
 Subject: Re: [DISCUSS] Ensuring Consistent Behavior for Alternative Hadoop
 FileSystems + Workshop

 Thanks for the summary Steve, very useful.

 I'm wondering a bit about the point on testing AbstractFileSystem rather
 than FileSystem. While these are both wrappers for DFSClient, they're
 pretty different in terms of the APIs they expose. Furthermore, AFS is not
 actually a client-facing API; clients interact with an AFS through
 FileContext.

 I ask because I did some work trying to unify the symlink tests for both
 FileContext and FileSystem (HADOOP-9370 and HADOOP-9355). Subtle things
 like the default mkdir semantics are different; you can see some of the
 contortions in HADOOP-9370. I ultimately ended up just adhering to the
 FileContext-style behavior, but as a result I'm not really testing some
 parts of FileSystem.

 Are we going to end up with two different sets of validation tests? Or just
 choose one API over the other? FileSystem is supposed to eventually be
 deprecated in favor of FileContext (HADOOP-6446, filed in 2009), but actual
 uptake in practice has been slow.

 Best,
 Andrew


 On Mon, Jun 10, 2013 at 1:49 PM, Stephen Watt sw...@redhat.com wrote:

  For those interested - I posted a recap of this mornings Google Hangout
 on
  the Wiki Page at https://wiki.apache.org/hadoop/HCFS/Progress
 
  On Jun 5, 2013, at 8:14 PM, Stephen Watt wrote:
 
   Hi Folks
  
   Per Roman's recommendation I've created a Wiki Page for organizing the
  work and managing the logistics -
  https://wiki.apache.org/hadoop/HCFS/Progress
  
   I'd like to propose a Google Hangout at 9am PST on Monday June 10th to
  get together and discuss the initiative. Please respond back to me if
  you're interested or would like to propose a different time. I'll update
  our Wiki page with the logistics.
  
   Regards
   Steve Watt
  
   - Original Message -
   From: Roman Shaposhnik shaposh...@gmail.com
   To: Stephen Watt sw...@redhat.com
   Cc: common-dev@hadoop.apache.org, mbhandar...@gopivotal.com, shv
  hadoop shv.had...@gmail.com, ste...@hortonworks.com,
 erlv5...@gmail.com,
  apurt...@apache.org
   Sent: Friday, May 31, 2013 5:28:58 PM
   Subject: Re: [DISCUSS] Ensuring Consistent Behavior for Alternative
  Hadoop FileSystems + Workshop
  
   On Fri, May 31, 2013 at 1:00 PM, Stephen Watt sw...@redhat.com
 wrote:
   What is the protocol for organizing the logistics and collaborating? I
 am loath to flood common-dev with "does this time work for you?" emails
  from the interested parties. Do we create a high level JIRA ticket and
  collaborate and post comments and G+ meetup times on that ? Another
 option
  might be the Wiki, I'd be happy to be responsible with tracking progress
 on
  https://wiki.apache.org/hadoop/HCFS/Progress until we are able to break
  initiatives down into more granular JIRA tickets.
  
   I'd go with a wiki page and perhaps http://www.doodle.com/
  
   After we've had a few G+ hangouts, for those that would like to meet
  face to face, I have also made an all day reservation for a meeting room
  that can 

Re: Heads up: branch-2.1-beta

2013-06-14 Thread Arun C Murthy
As Ramya noted, things are looking good on branch-2.1-beta ATM.

Henceforth, can I please ask committers to hold off non-blocker fixes for the 
final set of tests?

thanks,
Arun

On Jun 4, 2013, at 8:32 AM, Arun C Murthy wrote:

 Folks,
 
 The vast majority of the planned features and API work is complete, thanks 
 to everyone who contributed!
 
 I've created a branch-2.1-beta branch from which I anticipate I can make the 
 first of our beta releases very shortly.
 
 For now the remaining work is to wrap up loose ends i.e. last minute api work 
 (e.g. YARN-759 showed up last night for consideration), bug-fixes etc.; then 
 run this through a battery of unit/system/integration tests and do a final 
 review before we ship. There is more work remaining on documentation (e.g. 
 HADOOP-9517) and I plan to personally focus on it this week - obviously help 
 reviewing docs is very welcome.
 
 Committers, from now, please please exercise your judgement on where you 
 commit. Typically, features should go into branch-2 with 2.3.0 as the version 
 on jira (fix-version 2.3.0 is ready). The expectation is that 2.2.0 will be 
 limited to content in branch-2.1-beta and we stick to stabilizing it 
 henceforth (I've deliberately not created 2.2.0 fix-version on jira yet).
 
 thanks,
 Arun

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/




Re: Heads up: branch-2.1-beta

2013-06-14 Thread Alejandro Abdelnur
Arun,

This sounds great. Following is the list of JIRAs I'd like to get in. Note
that they are ready or almost ready; my estimate is that they can be taken
care of in a couple of days.

Thanks.

* YARN-752: In AMRMClient, automatically add corresponding rack requests
for requested nodes

impact: behavior change

status: patch avail, reviewed by Bikas. As Bikas did some changes it needs
another committer to look at it.

* YARN-521: Augment AM - RM client module to be able to request containers
only at specific locations

impact: AMRM client API change

status: patch avail, needs to be reviewed, needs YARN-752

* YARN-791: Ensure that RM RPC APIs that return nodes are consistent with
/nodes REST API

impact: Yarn client API and proto change

status: patch avail, review in progress

* YARN-649: Make container logs available over HTTP in plain text

impact: Addition to NM HTTP REST API. Needed for MAPREDUCE-4362 (which does
not change API)

status: patch avail, review in progress

* MAPREDUCE-5171: Expose blacklisted nodes from the MR AM REST API

impact: Addition to MRAM HTTP API

status: patch avail, +1ed, needs to be committed

* MAPREDUCE-5130: Add missing job config options to mapred-default.xml

impact: behavior change

status: patch avail, needs to be reviewed

* MAPREDUCE-5311: Remove slot millis computation logic and deprecate
counter constants

impact: behavior change

status: patch avail, needs to be reviewed

* YARN-787: Remove resource min from Yarn client API

impact: Yarn client API change

status: patch needs rebase, depends on MAPREDUCE-5311




On Fri, Jun 14, 2013 at 1:17 PM, Arun C Murthy a...@hortonworks.com wrote:

 As Ramya noted, things are looking good on branch-2.1-beta ATM.

 Henceforth, can I please ask committers to hold off non-blocker fixes for
 the final set of tests?

 thanks,
 Arun

 On Jun 4, 2013, at 8:32 AM, Arun C Murthy wrote:

  Folks,
 
  The vast majority of the planned features and API work is complete,
 thanks to everyone who contributed!
 
  I've created a branch-2.1-beta branch from which I anticipate I can make
 the first of our beta releases very shortly.
 
  For now the remaining work is to wrap up loose ends i.e. last minute api
 work (e.g. YARN-759 showed up last night for consideration), bug-fixes
 etc.; then run this through a battery of unit/system/integration tests and
 do a final review before we ship. There is more work remaining on
 documentation (e.g. HADOOP-9517) and I plan to personally focus on it this
 week - obviously help reviewing docs is very welcome.
 
  Committers, from now, please please exercise your judgement on where you
 commit. Typically, features should go into branch-2 with 2.3.0 as the
 version on jira (fix-version 2.3.0 is ready). The expectation is that 2.2.0
 will be limited to content in branch-2.1-beta and we stick to stabilizing
 it henceforth (I've deliberately not created 2.2.0 fix-version on jira yet).
 
  thanks,
  Arun

 --
 Arun C. Murthy
 Hortonworks Inc.
 http://hortonworks.com/





-- 
Alejandro


[jira] [Reopened] (HADOOP-9646) Inconsistent exception specifications in FileUtils#chmod

2013-06-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe reopened HADOOP-9646:
--


 Inconsistent exception specifications in FileUtils#chmod
 

 Key: HADOOP-9646
 URL: https://issues.apache.org/jira/browse/HADOOP-9646
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.1.0-beta

 Attachments: HADOOP-9646.001.patch, HADOOP-9646.002.patch


 There are two FileUtils#chmod methods:
 {code}
 public static int chmod(String filename, String perm
   ) throws IOException, InterruptedException;
 public static int chmod(String filename, String perm, boolean recursive)
 throws IOException;
 {code}
 The first one just calls the second one with {{recursive = false}}, but 
 despite that it is declared as throwing {{InterruptedException}}, something 
 the second one doesn't declare.
 The new Java7 chmod API, which we will transition to once JDK6 support is 
 dropped, does *not* throw {{InterruptedException}}
 See 
 [http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#setOwner(java.nio.file.Path,
  java.nio.file.attribute.UserPrincipal)]
 So we should make these consistent by removing the {{InterruptedException}}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Heads up: branch-2.1-beta

2013-06-14 Thread Roman Shaposhnik
On Thu, Jun 6, 2013 at 4:48 AM, Arun C Murthy a...@hortonworks.com wrote:

 On Jun 5, 2013, at 11:04 AM, Roman Shaposhnik wrote

 On the Bigtop side of things, once we have stable Bigtop 0.6.0 platform
 based on Hadoop 2.0.x codeline we plan to start running the same battery
 of integration tests on the branch-2.1-beta.

 We plan to simply file JIRAs if anything gets detected and I will also
 publish the URL of the Jenkins job once it gets created.

 Thanks Roman. Is there an ETA for this? Also, please file jiras with Blocker 
 priority to catch attention.

The build is up and running (and all green on all of the 9 Linux platforms!):
http://bigtop01.cloudera.org:8080/job/Hadoop-2.1.0/

The immediate benefit here is that we get to see that the
build is OK on all these Linuxes, and also that anybody can easily
install packaged Hadoop 2.1.0 nightly builds.

Starting from next week, I'll start running regular tests
on these bits and will keep you guys posted!

Thanks,
Roman.