Jenkins build is back to normal : Hadoop-Common-trunk #1518

2015-06-06 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/1518/changes



Jenkins build is back to normal : Hadoop-common-trunk-Java8 #220

2015-06-06 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-common-trunk-Java8/220/changes



[jira] [Created] (HADOOP-12071) conftest is not documented

2015-06-06 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12071:
---

 Summary: conftest is not documented
 Key: HADOOP-12071
 URL: https://issues.apache.org/jira/browse/HADOOP-12071
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Kengo Seki
Assignee: Kengo Seki


HADOOP-7947 introduced a new hadoop subcommand, conftest, but it is not 
documented yet.





[jira] [Created] (HADOOP-12070) Some of the bin/hadoop subcommands are not available on Windows

2015-06-06 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12070:
---

 Summary: Some of the bin/hadoop subcommands are not available on 
Windows
 Key: HADOOP-12070
 URL: https://issues.apache.org/jira/browse/HADOOP-12070
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Kengo Seki
Assignee: Kengo Seki


* conftest, distch, jnipath and trace are not enabled in hadoop.cmd
* kerbname is enabled, but does not appear in the help message





Reminder: Apache committers have access to a free MSDN license

2015-06-06 Thread Chris Nauroth
If you are a committer on any Apache project (not just Hadoop), then you
have access to a free MSDN license.  The details are described here.

https://svn.apache.org/repos/private/committers/donated-licenses/msdn-license-grants.txt


You'll need to authenticate with your Apache credentials.

This means that all Hadoop committers, and a large number of contributors
who are also committers on other Apache projects, are empowered to review
and test patches on Windows.

After getting the free MSDN license, you can download the installation ISO
for Windows Server 2008 or 2012 and run it in a VirtualBox VM (or your
hypervisor of choice). Instructions for setting up a Windows development
environment have been in BUILDING.txt for a few years.

This would prevent situations where patches are blocked from getting
committed while waiting for me or any other individual to test.

--Chris Nauroth



Re: upstream jenkins build broken?

2015-06-06 Thread Sean Busbey
Hi Folks!

After working on test-patch with other folks for the last few months, I
think we've reached the point where we can make the fastest progress
towards the goal of a general-use pre-commit patch tester by spinning
things into a project focused on just that. I think we have a mature enough
code base and a sufficient, if fledgling, community, so I'm going to put
together a TLP proposal.

Thanks for the feedback thus far from use within Hadoop. I hope we can
continue to make things more useful.

-Sean

On Wed, Mar 11, 2015 at 5:16 PM, Sean Busbey bus...@cloudera.com wrote:

 HBase's dev-support folder is where the scripts and support files live.
 We've only recently started adding anything to the maven builds that's
 specific to jenkins[1]; so far it's diagnostic stuff, but that's where I'd
 add in more if we ran into the same permissions problems y'all are having.

 There's also our precommit job itself, though it isn't large[2]. AFAIK, we
 don't properly back this up anywhere; we just notify each other of changes
 on a particular mail thread[3].

 [1]: https://github.com/apache/hbase/blob/master/pom.xml#L1687
 [2]: https://builds.apache.org/job/PreCommit-HBASE-Build/ (they're all
 red because I just finished fixing mvn site running out of permgen)
 [3]: http://s.apache.org/NT0


 On Wed, Mar 11, 2015 at 4:51 PM, Chris Nauroth cnaur...@hortonworks.com
 wrote:

 Sure, thanks Sean!  Do we just look in the dev-support folder in the HBase
 repo?  Is there any additional context we need to be aware of?

 Chris Nauroth
 Hortonworks
 http://hortonworks.com/






 On 3/11/15, 2:44 PM, Sean Busbey bus...@cloudera.com wrote:

 +dev@hbase
 
 HBase has recently been cleaning up our precommit jenkins jobs to make them
 more robust. From what I can tell our stuff started off as an earlier
 version of what Hadoop uses for testing.
 
 Folks on either side open to an experiment of combining our precommit check
 tooling? In principle we should be looking for the same kinds of things.
 
 Naturally we'll still need different jenkins jobs to handle different
 resource needs and we'd need to figure out where stuff eventually lives,
 but that could come later.
 
 On Wed, Mar 11, 2015 at 4:34 PM, Chris Nauroth cnaur...@hortonworks.com
 
 wrote:
 
  The only thing I'm aware of is the failOnError option:
 
 
 
  http://maven.apache.org/plugins/maven-clean-plugin/examples/ignoring-errors.html
 
 
   I prefer that we don't disable this, because ignoring different kinds of
  failures could leave our build directories in an indeterminate state.  For
  example, we could end up with an old class file on the classpath for test
  runs that was supposedly deleted.
 
   I think it's worth exploring Eddy's suggestion to try simulating failure
  by placing a file where the code expects to see a directory.  That might
  even let us enable some of these tests that are skipped on Windows,
  because Windows allows access for the owner even after permissions have
  been stripped.
 
  Chris Nauroth
  Hortonworks
  http://hortonworks.com/
 
 
 
 
 
 
  On 3/11/15, 2:10 PM, Colin McCabe cmcc...@alumni.cmu.edu wrote:
 
  Is there a maven plugin or setting we can use to simply remove
  directories that have no executable permissions on them?  Clearly we
  have the permission to do this from a technical point of view (since
  we created the directories as the jenkins user); it's simply that the
  code refuses to do it.
  
  Otherwise I guess we can just fix those tests...
  
  Colin
  
  On Tue, Mar 10, 2015 at 2:43 PM, Lei Xu l...@cloudera.com wrote:
   Thanks a lot for looking into HDFS-7722, Chris.
  
   In HDFS-7722:
    TestDataNodeVolumeFailureXXX tests reset data dir permissions in TearDown().
    TestDataNodeHotSwapVolumes resets permissions in a finally clause.
   
    Also, I ran mvn test several times on my machine and all tests passed.
  
    However, since DiskChecker#checkDirAccess() is implemented as follows:
  
    private static void checkDirAccess(File dir) throws DiskErrorException {
      if (!dir.isDirectory()) {
        throw new DiskErrorException("Not a directory: " + dir.toString());
      }

      checkAccessByFileMethods(dir);
    }
  
    One potentially safer alternative is replacing the data dir with a regular
    file to simulate disk failures.
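
    A minimal sketch of that alternative, assuming JUnit 4 and hadoop-common
    on the test classpath (the test class name and temp file are hypothetical):

    import static org.junit.Assert.fail;

    import java.io.File;
    import java.io.IOException;

    import org.apache.hadoop.util.DiskChecker;
    import org.apache.hadoop.util.DiskChecker.DiskErrorException;
    import org.junit.Test;

    public class TestSimulatedDiskFailure {
      @Test
      public void testRegularFileWhereDataDirExpected() throws IOException {
        // Put a regular file where the code under test expects a directory.
        File fakeDataDir = File.createTempFile("fake-data-dir", null);
        fakeDataDir.deleteOnExit();
        try {
          // DiskChecker rejects a non-directory path outright, so no
          // permission changes are needed and nothing has to be restored.
          DiskChecker.checkDir(fakeDataDir);
          fail("expected DiskErrorException for a non-directory path");
        } catch (DiskErrorException expected) {
          // This exception is the simulated disk failure.
        }
      }
    }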
  
   On Tue, Mar 10, 2015 at 2:19 PM, Chris Nauroth
  cnaur...@hortonworks.com wrote:
    TestDataNodeHotSwapVolumes, TestDataNodeVolumeFailure,
    TestDataNodeVolumeFailureReporting, and
    TestDataNodeVolumeFailureToleration all remove executable permissions from
    directories like the one Colin mentioned to simulate disk failures at data
    nodes.  I reviewed the code for all of those, and they all appear to be
    doing the necessary work to restore executable permissions at the end of
    the test.  The only recent uncommitted patch I've seen that makes changes
    in these test suites is HDFS-7722.  That patch still looks fine though.  I
    don't know if 
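
For comparison, a minimal sketch of the strip-and-restore pattern those test
suites rely on, using a hypothetical data directory (java.io.File.setExecutable
generally cannot revoke permissions on Windows, which matches the earlier note
about Windows allowing owner access):

import java.io.File;
import java.io.IOException;

public class VolumeFailureSketch {
  public static void main(String[] args) throws IOException {
    File dataDir = new File(System.getProperty("java.io.tmpdir"), "fake-volume");
    if (!dataDir.isDirectory() && !dataDir.mkdirs()) {
      throw new IOException("could not create " + dataDir);
    }
    // Simulate a failed volume by revoking execute (directory search)
    // permission from the data directory.
    if (!dataDir.setExecutable(false)) {
      System.err.println("could not revoke execute permission; skipping");
      return;
    }
    try {
      // ... exercise the code under test against dataDir here ...
    } finally {
      // Always restore the permission, as the tests do in TearDown() or a
      // finally clause, so later runs and cleanup are not blocked.
      dataDir.setExecutable(true);
    }
  }
}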

[DISCUSS] project for pre-commit patch testing (was Re: upstream jenkins build broken?)

2015-06-06 Thread Sean Busbey
Sorry for the resend. I figured this deserves a [DISCUSS] flag.



On Sat, Jun 6, 2015 at 10:39 PM, Sean Busbey bus...@cloudera.com wrote:

 Hi Folks!

 After working on test-patch with other folks for the last few months, I
 think we've reached the point where we can make the fastest progress
 towards the goal of a general-use pre-commit patch tester by spinning
 things into a project focused on just that. I think we have a mature enough
 code base and a sufficient, if fledgling, community, so I'm going to put
 together a TLP proposal.

 Thanks for the feedback thus far from use within Hadoop. I hope we can
 continue to make things more useful.

 -Sean

 On Wed, Mar 11, 2015 at 5:16 PM, Sean Busbey bus...@cloudera.com wrote:

 HBase's dev-support folder is where the scripts and support files live.
 We've only recently started adding anything to the maven builds that's
 specific to jenkins[1]; so far it's diagnostic stuff, but that's where I'd
 add in more if we ran into the same permissions problems y'all are having.

 There's also our precommit job itself, though it isn't large[2]. AFAIK,
 we don't properly back this up anywhere; we just notify each other of
 changes on a particular mail thread[3].

 [1]: https://github.com/apache/hbase/blob/master/pom.xml#L1687
 [2]: https://builds.apache.org/job/PreCommit-HBASE-Build/ (they're all
 red because I just finished fixing mvn site running out of permgen)
 [3]: http://s.apache.org/NT0


 On Wed, Mar 11, 2015 at 4:51 PM, Chris Nauroth cnaur...@hortonworks.com
 wrote:

 Sure, thanks Sean!  Do we just look in the dev-support folder in the HBase
 repo?  Is there any additional context we need to be aware of?

 Chris Nauroth
 Hortonworks
 http://hortonworks.com/






 On 3/11/15, 2:44 PM, Sean Busbey bus...@cloudera.com wrote:

 +dev@hbase
 
 HBase has recently been cleaning up our precommit jenkins jobs to make them
 more robust. From what I can tell our stuff started off as an earlier
 version of what Hadoop uses for testing.
 
 Folks on either side open to an experiment of combining our precommit check
 tooling? In principle we should be looking for the same kinds of things.
 
 Naturally we'll still need different jenkins jobs to handle different
 resource needs and we'd need to figure out where stuff eventually lives,
 but that could come later.
 
 On Wed, Mar 11, 2015 at 4:34 PM, Chris Nauroth 
 cnaur...@hortonworks.com
 wrote:
 
  The only thing I'm aware of is the failOnError option:
 
 
 
  http://maven.apache.org/plugins/maven-clean-plugin/examples/ignoring-errors.html
 
 
   I prefer that we don't disable this, because ignoring different kinds of
  failures could leave our build directories in an indeterminate state.  For
  example, we could end up with an old class file on the classpath for test
  runs that was supposedly deleted.
 
   I think it's worth exploring Eddy's suggestion to try simulating failure
  by placing a file where the code expects to see a directory.  That might
  even let us enable some of these tests that are skipped on Windows,
  because Windows allows access for the owner even after permissions have
  been stripped.
 
  Chris Nauroth
  Hortonworks
  http://hortonworks.com/
 
 
 
 
 
 
  On 3/11/15, 2:10 PM, Colin McCabe cmcc...@alumni.cmu.edu wrote:
 
  Is there a maven plugin or setting we can use to simply remove
  directories that have no executable permissions on them?  Clearly we
  have the permission to do this from a technical point of view (since
  we created the directories as the jenkins user); it's simply that the
  code refuses to do it.
  
  Otherwise I guess we can just fix those tests...
  
  Colin
  
  On Tue, Mar 10, 2015 at 2:43 PM, Lei Xu l...@cloudera.com wrote:
   Thanks a lot for looking into HDFS-7722, Chris.
  
   In HDFS-7722:
    TestDataNodeVolumeFailureXXX tests reset data dir permissions in TearDown().
    TestDataNodeHotSwapVolumes resets permissions in a finally clause.
   
    Also, I ran mvn test several times on my machine and all tests passed.
  
    However, since DiskChecker#checkDirAccess() is implemented as follows:
  
    private static void checkDirAccess(File dir) throws DiskErrorException {
      if (!dir.isDirectory()) {
        throw new DiskErrorException("Not a directory: " + dir.toString());
      }

      checkAccessByFileMethods(dir);
    }
  
    One potentially safer alternative is replacing the data dir with a regular
    file to simulate disk failures.
  
   On Tue, Mar 10, 2015 at 2:19 PM, Chris Nauroth
  cnaur...@hortonworks.com wrote:
    TestDataNodeHotSwapVolumes, TestDataNodeVolumeFailure,
    TestDataNodeVolumeFailureReporting, and
    TestDataNodeVolumeFailureToleration all remove executable permissions from
    directories like the one Colin mentioned to simulate disk failures at data
    nodes.  I reviewed the code for all of those, and they all appear to be
    doing the necessary work to restore executable permissions at the end of
    the test.  The only 

[jira] [Created] (HADOOP-12072) conftest raises a false alarm over the fair scheduler configuration file

2015-06-06 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12072:
---

 Summary: conftest raises a false alarm over the fair scheduler 
configuration file
 Key: HADOOP-12072
 URL: https://issues.apache.org/jira/browse/HADOOP-12072
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kengo Seki


The hadoop conftest subcommand validates the XML files in ${HADOOP_CONF_DIR} by 
default, and assumes that the root element of each file is configuration.
But it is common to place the fair scheduler configuration file at 
${HADOOP_CONF_DIR}/fair-scheduler.xml, whose root element is allocations, so 
conftest raises a false alarm.

{code}
[sekikn@mobile hadoop-3.0.0-SNAPSHOT]$ bin/hadoop conftest
/Users/sekikn/hadoop/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/capacity-scheduler.xml:
 valid
/Users/sekikn/hadoop/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/core-site.xml:
 valid
/Users/sekikn/hadoop/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/fair-scheduler.xml:
bad conf file: top-level element not configuration
/Users/sekikn/hadoop/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/hadoop-policy.xml:
 valid
/Users/sekikn/hadoop/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/hdfs-site.xml:
 valid
/Users/sekikn/hadoop/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/httpfs-site.xml:
 valid
/Users/sekikn/hadoop/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/kms-acls.xml:
 valid
/Users/sekikn/hadoop/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/kms-site.xml:
 valid
/Users/sekikn/hadoop/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/mapred-site.xml:
 valid
/Users/sekikn/hadoop/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/yarn-site.xml:
 valid
Invalid file exists
{code}
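
For illustration only (this is not the actual conftest implementation), the 
false alarm comes down to a root-element comparison along these lines, 
sketched here with the JDK's DOM parser:

{code}
import java.io.File;

import javax.xml.parsers.DocumentBuilderFactory;

public class RootElementCheck {
  public static void main(String[] args) throws Exception {
    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
    for (String path : args) {
      // conftest-style check: the root element must be configuration,
      // which fair-scheduler.xml (root element allocations) never satisfies.
      String root = factory.newDocumentBuilder()
          .parse(new File(path)).getDocumentElement().getTagName();
      System.out.println(path + ": " + ("configuration".equals(root)
          ? "valid" : "bad conf file: top-level element not configuration"));
    }
  }
}
{code}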


