Build failed in Jenkins: Hadoop-common-trunk-Java8 #625

2015-10-30 Thread Apache Jenkins Server
See 

Changes:

[ozawa] Move YARN-3580 in CHANGES.txt from 2.8 to 2.7.2.

[szetszwo] HDFS-9323. Randomize the DFSStripedOutputStreamWithFailure tests.

--
[...truncated 3907 lines...]
---

---
 T E S T S
---
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.157 sec - in 
org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.minikdc.TestMiniKdc
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.805 sec - in 
org.apache.hadoop.minikdc.TestMiniKdc

Results :

Tests run: 6, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-minikdc 
---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-minikdc ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-minikdc ---
[INFO] 
Loading source files for package org.apache.hadoop.minikdc...
Constructing Javadoc information...
Standard Doclet version 1.8.0
Building tree for all the packages and classes...
Generating 

Building index for all the packages and classes...
Generating 

Building index for all classes...
Generating 

2 warnings
[WARNING] Javadoc Warnings
[WARNING] 

Re: Github integration for Hadoop

2015-10-30 Thread Akira AJISAKA

+1 if the current review process is kept.

Pros:

* Using PRs is good for attracting more developers, because PRs are 
familiar to many developers.

* Better UI. (No need to copy the diff manually when reviewing patches!)

Cons:

* Discussions can be split. However, according to the blog post below, 
comments and other activity on PRs can be recorded on the project's 
mailing list or mirrored to the JIRA ticket, so we can search discussions 
by searching the MLs or JIRAs. Therefore, IMO, it's not a big problem.


https://blogs.apache.org/infra/entry/improved_integration_between_apache_and

* github.com is outside Apache infra, so the service could become 
unavailable for some reason (e.g., being acquired by another company, or 
an outage). I think we need to keep the current review process even after 
switching to GitHub PRs.


Regards,
Akira

On 10/30/15 16:38, Alexander Pivovarov wrote:

Andrew, look at the Spark github. They use PRs and I do not see extra merge
commits. A committer can do a fast-forward merge using the git command line.
PRs are used to leave inline feedback on the fix.
On Oct 29, 2015 1:34 PM, "Andrew Wang"  wrote:


Has anything changed regarding the github integration since the last time
we discussed this? That blog post is from 2014, and we discussed
alternative review systems earlier in 2015.

Colin specifically was concerned about forking the discussion between JIRA
and other places:

http://search-hadoop.com/m/uOzYtkYxo4qazi=Re+Patch+review+process
http://search-hadoop.com/m/uOzYtSz7z624qazi=Re+Patch+review+process

There are also questions about PRs leading to a messy commit history due to
the extra merge commits. Spark IIRC has something to linearize it again, which
seems important if we actually want to do this.

Could someone outline the upsides of using github? I don't find the review
UI particularly great compared to Gerrit or even RB, and there's the merge
commit issue. For instance, do we think using Github would lead to more
contributions? Improved developer workflows? Have we re-examined
alternatives like Gerrit or RB as well?

On Thu, Oct 29, 2015 at 12:25 PM, Arpit Agarwal 
wrote:


+1, thanks for proposing it.





On 10/29/15, 10:47 AM, "Owen O'Malley"  wrote:


All,
   For code & patch review, many of the newer projects are using the
Github pull request integration. You can read about it here:





https://blogs.apache.org/infra/entry/improved_integration_between_apache_and


It basically lets you:
* mirror comments between pull requests and jira
* close pull requests
* mirror pull request comments to the Apache mail lists

Thoughts?
.. Owen










Build failed in Jenkins: Hadoop-common-trunk-Java8 #631

2015-10-30 Thread Apache Jenkins Server
See 

Changes:

[kihwal] MAPREDUCE-6451. DistCp has incorrect chunkFilePath for multiple jobs

[kihwal] Addendum to MAPREDUCE-6451

--
[...truncated 5887 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.473 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileSeek
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.318 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.281 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.677 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileStreams
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.05 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileStreams
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.95 sec - in 
org.apache.hadoop.io.file.tfile.TestTFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.999 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.659 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.097 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileSplit
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.552 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileSplit
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileComparator2
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.853 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileComparator2
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.025 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.762 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileSeqFileComparison
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.221 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileSeqFileComparison
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestVLong
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.31 sec - in 
org.apache.hadoop.io.file.tfile.TestVLong
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestTextNonUTF8
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.152 sec - in 
org.apache.hadoop.io.TestTextNonUTF8
Java HotSpot(TM) 64-Bit Server VM warning: 

Jenkins build is back to normal : Hadoop-Common-trunk #1931

2015-10-30 Thread Apache Jenkins Server
See 



Re: Github integration for Hadoop

2015-10-30 Thread Andrew Wang
On Fri, Oct 30, 2015 at 1:59 PM, Colin P. McCabe  wrote:

> I am -1 on the current GH stuff, just because I feel like there wasn't
> enough discussion, testing, and documentation of it.  The initial
> proposal had no details and was implemented before any of the people
> who had had misgivings on the previous email thread had a chance to
> even see it.
>
> It's a big change to totally change our commit workflow.  Much bigger
> than something like merging a feature branch or making a release, both
> of which require votes.  This kind of change should require an actual
> proposal and a VOTE thread.
>
So this was only clarified this morning, but we'd be using GH only for
code review, not for code integration. So committers still need to download
the PR, squash it, update CHANGES, and push.

Agreed: if we go beyond this, it 100% needs to be a VOTE. Even this
probably should have been a VOTE, with a proposal spelling out all these
details.

> I would be +0 on GH if there were a concrete proposal addressing:
> * How is this going to interact with JIRA?  For example, what is the new
> process once a bug has been identified?  Is a JIRA required?  How do
> we link JIRA and GH?  Do we reject GH requests without a JIRA?
> * What will we update the HowToContribute page to say?
> * How will we deal with the merge commit problem?  Will we have a tool
> to remove them?  Can GH be configured to avoid them?  Will we update
> tools on top (including in-house tools) to handle merge commits?
> * Will we have tooling to search GH pull requests?  How do we prevent
> patches from new committers from falling through the cracks if they
> don't know about JIRA?
>
>
Strongly agree that we need documentation for all this. Some of my open
questions are also still pending. We need to test precommit integration too.

Owen, are you chasing all of this down? Do we need some JIRAs to track?


Jenkins build is back to normal : Hadoop-common-trunk-Java8 #626

2015-10-30 Thread Apache Jenkins Server
See 



Jenkins build is back to normal : Hadoop-common-trunk-Java8 #632

2015-10-30 Thread Apache Jenkins Server
See 



Build failed in Jenkins: Hadoop-Common-trunk #1932

2015-10-30 Thread Apache Jenkins Server
See 

Changes:

[sjlee] Updated the 2.6.2 final release date.

--
[...truncated 5398 lines...]
Running org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 0.969 sec - in 
org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractAppend
Running org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.97 sec - in 
org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractCreate
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 0.659 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractMkdir
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 0.722 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractCreate
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 0.7 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractRename
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 0.829 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractOpen
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 8, Time elapsed: 0.74 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractDelete
Running org.apache.hadoop.fs.TestPath
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.204 sec - in 
org.apache.hadoop.fs.TestPath
Running org.apache.hadoop.fs.TestLocalFileSystem
Tests run: 20, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 1.949 sec - in 
org.apache.hadoop.fs.TestLocalFileSystem
Running org.apache.hadoop.fs.permission.TestAcl
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.206 sec - in 
org.apache.hadoop.fs.permission.TestAcl
Running org.apache.hadoop.fs.permission.TestFsPermission
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.695 sec - in 
org.apache.hadoop.fs.permission.TestFsPermission
Running org.apache.hadoop.fs.TestFilterFileSystem
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.076 sec - in 
org.apache.hadoop.fs.TestFilterFileSystem
Running org.apache.hadoop.fs.TestStat
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.869 sec - in 
org.apache.hadoop.fs.TestStat
Running org.apache.hadoop.fs.TestFsOptions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.074 sec - in 
org.apache.hadoop.fs.TestFsOptions
Running org.apache.hadoop.fs.TestGlobExpander
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.063 sec - in 
org.apache.hadoop.fs.TestGlobExpander
Running org.apache.hadoop.fs.TestHarFileSystemBasics
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.44 sec - in 
org.apache.hadoop.fs.TestHarFileSystemBasics
Running org.apache.hadoop.fs.TestFileContextResolveAfs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.852 sec - in 
org.apache.hadoop.fs.TestFileContextResolveAfs
Running org.apache.hadoop.fs.TestFileSystemCanonicalization
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.512 sec - in 
org.apache.hadoop.fs.TestFileSystemCanonicalization
Running org.apache.hadoop.fs.TestFsShellCopy
Tests run: 16, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 1.578 sec - in 
org.apache.hadoop.fs.TestFsShellCopy
Running org.apache.hadoop.fs.TestSymlinkLocalFSFileContext
Tests run: 63, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 2.774 sec - in 
org.apache.hadoop.fs.TestSymlinkLocalFSFileContext
Running org.apache.hadoop.fs.TestFileContext
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.394 sec - in 
org.apache.hadoop.fs.TestFileContext
Running org.apache.hadoop.fs.TestDelegationTokenRenewer
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.745 sec - in 
org.apache.hadoop.fs.TestDelegationTokenRenewer
Running org.apache.hadoop.fs.TestFileSystemCaching
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.278 sec - in 
org.apache.hadoop.fs.TestFileSystemCaching
Running org.apache.hadoop.fs.TestFcLocalFsUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.813 sec - in 
org.apache.hadoop.fs.TestFcLocalFsUtil
Running org.apache.hadoop.fs.TestLocalFsFCStatistics
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.972 sec - in 
org.apache.hadoop.fs.TestLocalFsFCStatistics
Running org.apache.hadoop.fs.TestTruncatedInputBug
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.789 sec - in 
org.apache.hadoop.fs.TestTruncatedInputBug
Running org.apache.hadoop.fs.TestDU
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.507 

[jira] [Resolved] (HADOOP-12530) Error when trying to format HDFS by running hdfs namenode -format

2015-10-30 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton resolved HADOOP-12530.
---
Resolution: Not A Problem
  Assignee: Daniel Templeton

This is a configuration issue, not a Hadoop issue.

(The issue is that you're missing the '/' in the closing configuration tag.)
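
For reference, a well-formed core-site.xml has to end with a closing tag that
includes the slash, e.g.:

    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
      </property>
    </configuration>  <!-- note the '/' in the closing tag -->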

> Error when trying to format HDFS by running hdfs namenode -format
> -
>
> Key: HADOOP-12530
> URL: https://issues.apache.org/jira/browse/HADOOP-12530
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
> Environment: mac
>Reporter: Nadia
>Assignee: Daniel Templeton
>
> I receive following errors when running $ hdfs namenode -format
> 15/10/30 12:34:18 INFO namenode.NameNode: registered UNIX signal handlers for 
> [TERM, HUP, INT]
> 15/10/30 12:34:18 INFO namenode.NameNode: createNameNode [-format]
> [Fatal Error] core-site.xml:31:1: XML document structures must start and end 
> within the same entity.
> 15/10/30 12:34:19 FATAL conf.Configuration: error parsing conf core-site.xml
> org.xml.sax.SAXParseException; systemId: 
> file:/usr/local/Cellar/hadoop/2.7.1/libexec/etc/hadoop/core-site.xml; 
> lineNumber: 31; columnNumber: 1; XML document structures must start and end 
> within the same entity.
> core-site.xml looks like this: 
> <?xml version="1.0" encoding="UTF-8"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> <configuration>
>   <property>
>      <name>hadoop.tmp.dir</name>
>      <value>/usr/local/Cellar/hadoop/hdfs/tmp</value>
>      <description>A base for other temporary directories.</description>
>   </property>
>   <property>
>      <name>fs.default.name</name> 
>      <value>hdfs://localhost:9000</value> 
>   </property>
> <configuration>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12531) Regroup of all Hadoop Ecosystem UIs

2015-10-30 Thread SERGENT David (JIRA)
SERGENT David created HADOOP-12531:
--

 Summary: Regroup of all Hadoop Ecosystem UIs
 Key: HADOOP-12531
 URL: https://issues.apache.org/jira/browse/HADOOP-12531
 Project: Hadoop Common
  Issue Type: Improvement
  Components: site
 Environment: All
Reporter: SERGENT David
Priority: Minor


Today, all the products of the Hadoop ecosystem (HDFS, YARN, HBase, ...) have 
independent GUIs, each on a different port.
In a cloud environment, it is not possible to expose all these ports, so we 
often use a SOCKS proxy to access the UIs.
It would be a good idea to introduce a best practice that regroups all the 
UIs on a single port, like a plugin.
This way we could easily access all the UIs without having to resort to URL 
rewriting to provide common access to them.
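
For context, the SOCKS-proxy workaround mentioned above is typically
something like this (host names and ports are illustrative):

    # open a dynamic SOCKS tunnel via a host that can reach the cluster
    ssh -D 1080 user@cluster-gateway
    # then point the browser's SOCKS proxy at localhost:1080 to reach the
    # individual UIs, e.g. http://namenode:50070/ or http://resourcemanager:8088/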



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Java 8 + Jersey updates

2015-10-30 Thread Tsuyoshi Ozawa
On Fri, Oct 30, 2015 at 11:16 PM, Steve Loughran  wrote:
>
>> On 29 Oct 2015, at 15:40, Tsuyoshi Ozawa  wrote:
>>
>> Steve,
>>
>>> If you exclude jax-rs 2 and try to stay @ jersey 1.9 for your client, all 
>>> the http clients: KMS, webhdfs, ATS, aren't going to link.
>>
>> I thought we can use Jersey 1.19 on JDK 8 with client-side
>> compatibility, but do you mean that we cannot use Jersey 1.19 on JDK
>> 8? Please correct me if I've misunderstood.
>>
>
> jersey 1.19 works on JDK8. What I'm worried about is having two versions of 
> jersey on the classpath, as the patch you've put up rewrites all the 
> client-side code to move to jersey 2.
>

The latest patch I posted (HADOOP-9613.007.incompatible.patch) uses
Jersey 1.19. All the client-side rewrites in it are there to avoid
using the deprecated method ClientResponse.getClientResponseStatus(). In
fact, we can still use that method even though it has been marked deprecated
since 1.18.

https://jersey.java.net/nonav/apidocs/1.19/jersey/com/sun/jersey/api/client/ClientResponse.html
https://jersey.java.net/nonav/apidocs/1.19/jersey/com/sun/jersey/api/client/ClientResponse.html#getClientResponseStatus()
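
For illustration, the mechanical change is roughly the following (a sketch,
not the patch itself; webResource is a hypothetical com.sun.jersey
WebResource):

    ClientResponse response = webResource.get(ClientResponse.class);
    // Deprecated since Jersey 1.18:
    //   ClientResponse.Status status = response.getClientResponseStatus();
    // Equivalent without the deprecated call: compare raw status codes.
    if (response.getStatus() == ClientResponse.Status.OK.getStatusCode()) {
        // handle success
    }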

Maybe the latest patch confused you with these non-essential changes; I'm
sorry for that. Please correct me if I'm wrong.

Best,
- Tsuyoshi


Re: building branch-2 on windows

2015-10-30 Thread Steve Loughran

On 28 Oct 2015, at 21:28, Chris Nauroth  wrote:

I just confirmed that I can build the release-2.6.2-RC0 tag from source on
Windows, including the native components.

For zlib, my setup is to have the source extracted in C:\zlib-1.2.7, and
that's where I point ZLIB_HOME.  The headers are in that directory.  I
have the built zlib1.dll in C:\zlib-1.2.7-bin\x64fre.  I include that
directory in my PATH at runtime.  This is only a requirement for the
runtime dynamic linking though, not a build requirement.  Running "hadoop
checknative" can confirm if the dynamic linking is working correctly.

Steve's output shows that it's trying to call CMake.  hadoop.dll and
winutils.exe are not built using CMake, so this makes me think that you
called the build with the -Pnative profile.  This profile is
*nix-specific.  For Windows, use -Pnative-win, or simply omit the
argument, because native-win is on by default in the pom.xml when Maven
detects the OS is Windows.

The other ZLIB_* environment variables you mentioned are not necessary,
unless these are somehow used indirectly in your own custom dev setup.  I
only set ZLIB_HOME.
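
In other words, the setup described above amounts to roughly this (paths are
illustrative, and the exact mvn invocation may vary):

    rem Build time: zlib headers are found via ZLIB_HOME
    set ZLIB_HOME=C:\zlib-1.2.7
    rem Use the Windows native profile, not -Pnative
    mvn package -Pdist -Pnative-win -DskipTests -Dtar
    rem Runtime only: zlib1.dll must be on PATH for dynamic linking
    set PATH=%PATH%;C:\zlib-1.2.7-bin\x64fre
    hadoop checknative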

--Chris Nauroth




Got it, -Pnative-win was the key

I have the 2.6.2-RC0 release building, but the hadoop-common tests are failing.

Results :

Failed tests:
  TestUTF8.testGetBytes:58 expected: but 
was:
  TestUTF8.testIO:86 
expected:<...[?]...> but 
was:<...[?]???
?...>
  TestDecayRpcScheduler.testAccumulate:136 expected:<3> but was:<2>
  TestDecayRpcScheduler.testPriority:203 expected:<2> but was:<1>
  TestSaslRPC.testKerberosServer:812->assertAuthEquals:978 
expected:<.*RemoteException.*AccessControlException.*: SIMPLE
 authentication is not enabled.*> but was:
  TestProxyUserFromEnv.testProxyUserFromEnvironment:54 
expected:<[a]dministrator> but was:<[A]dministrator>

Tests in error:
  TestHttpCookieFlag.setUp:91 » Certificate Subject class type invalid.
  TestHttpCookieFlag.cleanup:147 NullPointer
  TestSSLHttpServer.setup:67 » Certificate Subject class type invalid.
  TestSSLHttpServer.cleanup:96 NullPointer
  TestSequenceFileAppend.testAppendBlockCompression:194->verify2Values:295 » IO 
...
  TestSequenceFileAppend.testAppendSort:286 » IO not a gzip file
  TestSequenceFileAppend.testAppendRecordCompression:160->verify2Values:296 » IO
  TestReloadingX509TrustManager.testReloadCorruptTrustStore:147 » Certificate 
Su...
  TestReloadingX509TrustManager.testReload:86 » Certificate Subject class type 
i...
  TestReloadingX509TrustManager.testReloadMissingTrustStore:121 » Certificate 
Su...
  TestSSLFactory.testNoClientCertsInitialization:337->createConfiguration:64 » 
Certificate
  
TestSSLFactory.testServerKeyPasswordDefaultsToPassword:205->checkSSLFactoryInitWithPasswords:248->checkSSLFactoryInitW
ithPasswords:283 » Certificate
  
TestSSLFactory.testServerCredProviderPasswords:224->checkSSLFactoryInitWithPasswords:283
 » Certificate
  
TestSSLFactory.testClientDifferentPasswordAndKeyPassword:211->checkSSLFactoryInitWithPasswords:248->checkSSLFactoryIni
tWithPasswords:283 » Certificate
  TestSSLFactory.serverModeWithoutClientCertsSocket »  Unexpected exception, 
exp...
  TestSSLFactory.serverModeWithClientCertsSocket »  Unexpected exception, 
expect...
  TestSSLFactory.testConnectionConfigurator:180->createConfiguration:64 » 
Certificate
  
TestSSLFactory.testServerDifferentPasswordAndKeyPassword:199->checkSSLFactoryInitWithPasswords:248->checkSSLFactoryIni
tWithPasswords:283 » Certificate
  TestSSLFactory.serverModeWithClientCertsVerifier »  Unexpected exception, 
expe...
  
TestSSLFactory.testClientKeyPasswordDefaultsToPassword:217->checkSSLFactoryInitWithPasswords:248->checkSSLFactoryInitW
ithPasswords:283 » Certificate
  TestSSLFactory.validHostnameVerifier:130->createConfiguration:64 » Certificate
  TestSSLFactory.clientMode »  Unexpected exception, 
expectedcreateConfiguration:64 » Certificate 
Subj...
  TestSSLFactory.serverModeWithoutClientCertsVerifier »  Unexpected exception, 
e...
  TestSecurityUtil.isOriginalTGTReturnsCorrectValues:57 » IllegalArgument Empty 
...

Tests run: 2752, Failures: 6, Errors: 25, Skipped: 140

Looking at these, they generally come in as OpenSSL/Kerberos issues, which 
could be put down to the OpenSSL and JDK versions (though the 2.6.2 release 
is meant to have the JDK8 Kerberos patch), plus some possible 
hostname/username discrepancy bugs.


Re: Github integration for Hadoop

2015-10-30 Thread Colin P. McCabe
I am -1 on the current GH stuff, just because I feel like there wasn't
enough discussion, testing, and documentation of it.  The initial
proposal had no details and was implemented before any of the people
who had had misgivings on the previous email thread had a chance to
even see it.

It's a big change to totally change our commit workflow.  Much bigger
than something like merging a feature branch or making a release, both
of which require votes.  This kind of change should require an actual
proposal and a VOTE thread.

I would be +0 on GH if there were a concrete proposal addressing:
* How is this going to interact with JIRA?  For example, what is the new
process once a bug has been identified?  Is a JIRA required?  How do
we link JIRA and GH?  Do we reject GH requests without a JIRA?
* What will we update the HowToContribute page to say?
* How will we deal with the merge commit problem?  Will we have a tool
to remove them?  Can GH be configured to avoid them?  Will we update
tools on top (including in-house tools) to handle merge commits?
* Will we have tooling to search GH pull requests?  How do we prevent
patches from new committers from falling through the cracks if they
don't know about JIRA?

Colin

On Fri, Oct 30, 2015 at 11:57 AM, Sean Busbey  wrote:
> On Fri, Oct 30, 2015 at 1:22 PM, Allen Wittenauer  wrote:
>>
>>> * Have we tried our precommit on PRs yet? Does it work for multiple
>>> branches? Is there a way to enforce rebase+squash vs. merge on the PR,
>>> since, per Allen, Yetus requires one commit to work?
>>
>>
>> I don’t know about the Jenkins-side of things (e.g., how does 
>> Jenkins trigger a build?).  As far as Yetus is concerned, here’s the 
>> functionality that has been written:
>>
>> * Pull patches from Github by PR #
>> * Comment on patches in Github, given credentials
>> * Comment on specific lines in Github, given credentials
>> * Test patches against the branch/repo that the pull request is 
>> against
>> * GH<->JIRA intelligence such that if a PR mentions an issue as the 
>> leading text in the subject line or an issue mentions a PR in the comments, 
>> pull from github and put a comment in both places (again, given credentials)
>
>
> Jenkins builds are all driven off of the "Precommit Admin" job that
> does a JQL search for enabled projects with open patches:
>
> https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-Admin/
>
> I believe with the current integration, that means we'll find and test
> any github PRs that are mentioned in a jira ticket that is in
> PATCH_AVAILABLE status.
>
> At some point we'll need an exemplar jenkins trigger job that just
> activates on open PRs, at least for ASF projects. But no such job
> exists now.
>
>
> --
> Sean


[jira] [Created] (HADOOP-12535) Run FileSystem contract tests with hadoop-azure.

2015-10-30 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-12535:
--

 Summary: Run FileSystem contract tests with hadoop-azure.
 Key: HADOOP-12535
 URL: https://issues.apache.org/jira/browse/HADOOP-12535
 Project: Hadoop Common
  Issue Type: Bug
  Components: azure, test
Reporter: Chris Nauroth
Assignee: Duo Xu


This issue proposes to implement the Hadoop {{FileSystem}} contract tests for 
hadoop-azure/WASB.  The contract tests define the expected semantics of the 
{{FileSystem}}, so running these for hadoop-azure is likely to catch potential 
problems and improve overall quality.
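
A minimal sketch of what one such test binding might look like (the contract
class name here is hypothetical; the real patch will define its own):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.contract.AbstractContractCreateTest;
    import org.apache.hadoop.fs.contract.AbstractFSContract;

    // Bind the generic "create" contract tests to the WASB filesystem.
    public class TestAzureNativeContractCreate extends AbstractContractCreateTest {
      @Override
      protected AbstractFSContract createContract(Configuration conf) {
        // NativeAzureFileSystemContract is a hypothetical contract definition
        // pointing the tests at a wasb:// test filesystem.
        return new NativeAzureFileSystemContract(conf);
      }
    }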



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Github integration for Hadoop

2015-10-30 Thread Owen O'Malley
It seems like there is overwhelming support for enabling the github
integration, so I went ahead and filed the infra ticket
.

This is explicitly not changing the way that we commit on Hadoop and
commits should be squashed and rebased rather than merged on to the master
branch. If you want to close a pull request with a commit, just add a line
at the end of the commit message that says:

closes apache/hadoop#123
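
For example, a committer landing a pull request might do something roughly
like this (a sketch; the remote name, patch file, and PR number are all
illustrative):

    git checkout trunk
    # apply the contribution as a single squashed commit
    git am HADOOP-12345.patch
    # update CHANGES.txt, then make sure the commit message ends with
    #   closes apache/hadoop#123
    git commit --amend
    git push apache trunk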

If someone else wants to setup gerrit, we can evaluate it. However, I am
skeptical that it would be so much better than Github that it would be
worth making people learn a new tool.

Thanks,
   Owen


Jenkins build is back to normal : Hadoop-Common-trunk #1929

2015-10-30 Thread Apache Jenkins Server
See 



[jira] [Created] (HADOOP-12534) User document for SFTP File System

2015-10-30 Thread ramtin (JIRA)
ramtin created HADOOP-12534:
---

 Summary: User document for SFTP File System
 Key: HADOOP-12534
 URL: https://issues.apache.org/jira/browse/HADOOP-12534
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: ramtin
Assignee: ramtin


We should have a document for using SFTP File System.
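
Presumably the document would cover usage along these lines (a sketch; the
exact URI form should be verified against the implementation):

    hadoop fs -ls sftp://user:password@host/some/dir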



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Common-trunk #1930

2015-10-30 Thread Apache Jenkins Server
See 

Changes:

[aw] HADOOP-12133 Add schemas to Maven Assembly XMLs

--
[...truncated 8546 lines...]
  [javadoc] Loading source files for package org.apache.hadoop.log.metrics...
  [javadoc] Loading source files for package org.apache.hadoop.metrics...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics.ganglia...
  [javadoc] Loading source files for package org.apache.hadoop.metrics.jvm...
  [javadoc] Loading source files for package org.apache.hadoop.metrics.spi...
  [javadoc] Loading source files for package org.apache.hadoop.metrics.util...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics2.annotation...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics2.filter...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.impl...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.lib...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.sink...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics2.sink.ganglia...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics2.source...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.util...
  [javadoc] Loading source files for package org.apache.hadoop.net...
  [javadoc] Loading source files for package org.apache.hadoop.net.unix...
  [javadoc] Loading source files for package org.apache.hadoop.security...
  [javadoc] Loading source files for package org.apache.hadoop.security.alias...
  [javadoc] Loading source files for package 
org.apache.hadoop.security.authorize...
  [javadoc] Loading source files for package org.apache.hadoop.security.http...
  [javadoc] Loading source files for package 
org.apache.hadoop.security.protocolPB...
  [javadoc] Loading source files for package org.apache.hadoop.security.ssl...
  [javadoc] Loading source files for package org.apache.hadoop.security.token...
  [javadoc] Loading source files for package 
org.apache.hadoop.security.token.delegation...
  [javadoc] Loading source files for package 
org.apache.hadoop.security.token.delegation.web...
  [javadoc] Loading source files for package org.apache.hadoop.service...
  [javadoc] Loading source files for package org.apache.hadoop.tools...
  [javadoc] Loading source files for package 
org.apache.hadoop.tools.protocolPB...
  [javadoc] Loading source files for package org.apache.hadoop.tracing...
  [javadoc] Loading source files for package org.apache.hadoop.util...
  [javadoc] Loading source files for package org.apache.hadoop.util.bloom...
  [javadoc] Loading source files for package org.apache.hadoop.util.curator...
  [javadoc] Loading source files for package org.apache.hadoop.util.hash...
  [javadoc] Constructing Javadoc information...
  [javadoc] 
:25:
 warning: Unsafe is internal proprietary API and may be removed in a future 
release
  [javadoc] import sun.misc.Unsafe;
  [javadoc]^
  [javadoc] 
:46:
 warning: Unsafe is internal proprietary API and may be removed in a future 
release
  [javadoc] import sun.misc.Unsafe;
  [javadoc]^
  [javadoc] 
:54:
 warning: ResolverConfiguration is internal proprietary API and may be removed 
in a future release
  [javadoc] import sun.net.dns.ResolverConfiguration;
  [javadoc]   ^
  [javadoc] 
:55:
 warning: IPAddressUtil is internal proprietary API and may be removed in a 
future release
  [javadoc] import sun.net.util.IPAddressUtil;
  [javadoc]^
  [javadoc] 
:21:
 warning: Signal is internal proprietary API and may be removed in a future 
release
  [javadoc] import sun.misc.Signal;
  [javadoc]^
  [javadoc] 
:22:
 warning: SignalHandler is internal proprietary API and may be removed in a 
future release
  [javadoc] import sun.misc.SignalHandler;
  [javadoc]^
  [javadoc] 

Build failed in Jenkins: Hadoop-common-trunk-Java8 #628

2015-10-30 Thread Apache Jenkins Server
See 

Changes:

[jlowe] MAPREDUCE-6528. Memory leak for HistoryFileManager.getJobSummary().

--
[...truncated 6148 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.144 sec - in 
org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 0.657 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 8, Time elapsed: 0.707 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 0.695 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 0.704 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 0.894 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 1.131 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractGetFileStatus
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.013 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractGetFileStatus
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.171 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractLoaded
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.953 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractLoaded
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.235 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.094 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.125 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.195 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractSetTimes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.943 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractSetTimes
Java HotSpot(TM) 

Re: Github integration for Hadoop

2015-10-30 Thread Andrew Wang
I'm trying to slow this discussion down so we can:

a) Determine the problem(s) we're trying to solve
b) See how the proposed solutions meet this problem

The flood of +1's was made before there was a full proposal of what "enabling
github integration" means; that was only really spelled out in Owen's most
recent email.

Referring to a), the problems mentioned in this thread:

1) Better, easier review tool (Steve)
2) Better attribution of contributors with their GH id (Arpit)
3) The backlog of PRs against our unused GH project (Owen)

Applying "use GH PRs as a review tool" to the above problems:

1) I think it has advantages, but as I said earlier, Github PRs really are
not designed for our rebase+squash workflow, since AFAIK they don't support
interdiffs.
2) not solved since the proposal is GH only as a review tool, not for
integration
3) not solved since all issues still need to start as a JIRA, or else the
mirroring won't work.

I'd also ask the following unknown questions:

* Is there a way to *disable* using PRs for integration? i.e. disable the
ability to merge PRs?
* Have we tried our precommit on PRs yet? Does it work for multiple
branches? Is there a way to enforce rebase+squash vs. merge on the PR,
since, per Allen, Yetus requires one commit to work?

This thread is barely 24 hours old, and I don't see why we're trying to
move this fast on the issue. Let's discuss some alternatives (!) and settle
on the right solution. We also haven't even broached review alternatives
like RB, Crucible, etc., which were in the running the last time this topic
came up.

Best,
Andrew


On Fri, Oct 30, 2015 at 9:19 AM, Owen O'Malley  wrote:

> It seems like there is overwhelming support for enabling the github
> integration, so I went ahead and filed the infra ticket
>  >.
>
> This is explicitly not changing the way that we commit on Hadoop and
> commits should be squashed and rebased rather than merged on to the master
> branch. If you want to close a pull request with a commit, just add a line
> at the end of the commit message that says:
>
> closes apache/hadoop#123
>
> If someone else wants to setup gerrit, we can evaluate it. However, I am
> skeptical that it would be so much better than Github that it would be
> worth making people learn a new tool.
>
> Thanks,
>Owen
>


[jira] [Created] (HADOOP-12533) Introduce FileNotFoundException in WASB for read and seek API

2015-10-30 Thread Dushyanth (JIRA)
Dushyanth created HADOOP-12533:
--

 Summary: Introduce FileNotFoundException in WASB for read and seek 
API
 Key: HADOOP-12533
 URL: https://issues.apache.org/jira/browse/HADOOP-12533
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.8.0
Reporter: Dushyanth
 Fix For: 2.8.0


Currently WASB throws an IOException from the read and seek APIs, for both 
block and page blobs, in scenarios where the backing blob does not exist. 
This creates problems for applications like HBase, which expect a 
FileNotFoundException in these scenarios. 

The fix is to check whether the exception from Azure storage is due to the 
blob not being found, and to throw a FileNotFoundException if that is the case.
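
A minimal sketch of the proposed check (assuming the Azure storage SDK's
StorageException and StorageErrorCodeStrings; the actual patch may differ):

    import java.io.FileNotFoundException;
    import java.io.IOException;
    import com.microsoft.azure.storage.StorageErrorCodeStrings;
    import com.microsoft.azure.storage.StorageException;

    // Hypothetical helper: translate Azure's "blob not found" error into the
    // FileNotFoundException that callers such as HBase expect.
    static IOException translateReadException(String key, StorageException e) {
      if (StorageErrorCodeStrings.BLOB_NOT_FOUND.equals(e.getErrorCode())) {
        return new FileNotFoundException(key + ": blob does not exist");
      }
      return new IOException("Error reading blob " + key, e);
    }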



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12532) Data race in IPC client Client.stop()

2015-10-30 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-12532:


 Summary: Data race in IPC client Client.stop()
 Key: HADOOP-12532
 URL: https://issues.apache.org/jira/browse/HADOOP-12532
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


I found a data race in ipc.Client.stop().

ipc.Client maintains a hash map of connection threads. When stop() is called, 
it interrupts all connection threads; the threads are supposed to remove 
themselves from the hash map as part of their cleanup work; and stop() 
periodically checks whether the hash map is empty and then returns.

The bug is that this check is not synchronized, and a connection thread 
actually removes itself from the hash map before it has finished terminating 
its connections.

This bug causes a regression for HDFS-4925. In fact, the fix in HDFS-4925 may 
not be correct, because it assumes that when QuorumJournalManager.close() 
returns, the IPC client connection threads have terminated. In reality the 
IPC code only ensures that connections are closed, not that the threads have 
exited (and even that is buggy).

This is also likely related to the bug reported in HDFS-4925 
(TestQuorumJournalManager.testPurgeLogs intermittently fails 
assertNoThreadsMatching).
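
A simplified model of the race (not the actual ipc.Client code; names are
illustrative, and java.util.Map/Hashtable are assumed imported):

    // stop() polls an unsynchronized map while each connection thread removes
    // itself from the map *before* it has finished closing its connection.
    private final Map<Long, Thread> connections = new Hashtable<>();

    public void stop() {
      for (Thread t : connections.values()) {
        t.interrupt();
      }
      // Race: a thread may already be gone from the map but still closing
      // its connection, so an empty map does not prove cleanup is complete.
      while (!connections.isEmpty()) {
        try {
          Thread.sleep(100);
        } catch (InterruptedException ie) {
          return;
        }
      }
    }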



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] hadoop pull request: HADOOP-11919 Testing Github integration

2015-10-30 Thread omalley
GitHub user omalley opened a pull request:

https://github.com/apache/hadoop/pull/40

HADOOP-11919 Testing Github integration

This is a test.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/omalley/hadoop hadoop-11919

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/40.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #40


commit 005b3def20bac6f46110556c6ecdaa31aba3e17d
Author: Owen O'Malley 
Date:   2015-10-30T17:04:30Z

HADOOP-11919. Empty commit to test github integration.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] hadoop pull request: HADOOP-11919 Testing Github integration

2015-10-30 Thread omalley
Github user omalley commented on the pull request:

https://github.com/apache/hadoop/pull/40#issuecomment-152589471
  
Testing comments on github pull requests.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [GitHub] hadoop pull request: HADOOP-11919 Testing Github integration

2015-10-30 Thread Owen O'Malley
Responding to a comment on github.

On Fri, Oct 30, 2015 at 10:11 AM, omalley  wrote:

> Github user omalley commented on the pull request:
>
> https://github.com/apache/hadoop/pull/40#issuecomment-152589471
>
> Testing comments on github pull requests.
>
>
> ---
> If your project is set up for it, you can reply to this email and have your
> reply appear on GitHub as well. If your project does not have this feature
> enabled and wishes so, or if the feature is enabled but not working, please
> contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
> with INFRA.
> ---
>


Re: Github integration for Hadoop

2015-10-30 Thread Colin P. McCabe
I think we should take more than 24 hours of discussion to make a big,
fundamental change to our code review process.

I used github while working on the Spark project, and frankly I didn't
like it.  I didn't like the way it split the discussion between JIRA
and github.  Also, often people would file a jira about something
where there was already a patch up on github for a different jira (or
sometimes no jira at all).  This led to a lot of frustration where
several people would post patches and only some of them would actually
get looked at.  github doesn't have the powerful searching or bug
tracking tools of JIRA.

Basically, imagine jira with no way to search for related issues, no
JQL, no "assignee", no "target version", "fix version", or
"component".  No "in progress" versus "open".  All you would have is a
"list of pull requests" and perhaps a username next to each item.
This system was very hard on newbies because their patches seldom got
looked at.  It is hard to find anything in that pile.

I think the Spark guys eventually built some kind of UI on top of
github to help them search through pull requests.  We would probably
also need something like this.

Spark uses github partially because it started as a github project, so
everyone was familiar with that.  I haven't seen an answer to Andrew's
question about what the value add is here for Hadoop to move to a new
system.  I have seen a few comments about a better review UI and
one-click patch submission, is that the main goal?

Colin

On Fri, Oct 30, 2015 at 9:19 AM, Owen O'Malley  wrote:
> It seems like there is overwhelming support for enabling the github
> integration, so I went ahead and filed the infra ticket
> .
>
> This is explicitly not changing the way that we commit on Hadoop and
> commits should be squashed and rebased rather than merged on to the master
> branch. If you want to close a pull request with a commit, just add a line
> at the end of the commit message that says:
>
> closes apache/hadoop#123
>
> If someone else wants to setup gerrit, we can evaluate it. However, I am
> skeptical that it would be so much better than Github that it would be
> worth making people learn a new tool.
>
> Thanks,
>Owen


Re: Github integration for Hadoop

2015-10-30 Thread Sean Busbey
On Fri, Oct 30, 2015 at 1:22 PM, Allen Wittenauer  wrote:
>
>> * Have we tried our precommit on PRs yet? Does it work for multiple
>> branches? Is there a way to enforce rebase+squash vs. merge on the PR,
>> since, per Allen, Yetus requires one commit to work?
>
>
> I don’t know about the Jenkins-side of things (e.g., how does Jenkins 
> trigger a build?).  As far as Yetus is concerned, here’s the functionality 
> that has been written:
>
> * Pull patches from Github by PR #
> * Comment on patches in Github, given credentials
> * Comment on specific lines in Github, given credentials
> * Test patches against the branch/repo that the pull request is 
> against
> * GH<->JIRA intelligence such that if a PR mentions an issue as the 
> leading text in the subject line or an issue mentions a PR in the comments, 
> pull from github and put a comment in both places (again, given credentials)


Jenkins builds are all driven off of the "Precommit Admin" job that
does a JQL search for enabled projects with open patches:

https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-Admin/

I believe with the current integration, that means we'll find and test
any github PRs that are mentioned in a jira ticket that is in
PATCH_AVAILABLE status.
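
That is, a query roughly of this shape (illustrative, not the job's exact
JQL):

    project in (HADOOP, HDFS, MAPREDUCE, YARN) AND status = "Patch Available"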

At some point we'll need an exemplar jenkins trigger job that just
activates on open PRs, at least for ASF projects. But no such job
exists now.


-- 
Sean


Re: FYI: Major, long-standing Issue with trunk's test-patch

2015-10-30 Thread Colin P. McCabe
Thanks for finding this issue, Allen.

This is somewhat of a tangent, but I'm curious who still uses
hadoop-pipes.  Certainly development has more or less stopped on it.
I think the last 5 years of commits on it have been things like
updating version numbers, fixing missing #includes, moving to CMake...
and before that, moving Hadoop's build system to Maven.

There's a lot of questionable things in pipes, including the fact that
there are no unit tests, and it unconditionally builds with gcc -O0.
I think many people moved to using Hadoop streaming instead of Hadoop
pipes, since it works with other programming languages better.  Should
we retire this component at some point, or does it still fill a role?

best,
Colin

On Tue, Oct 27, 2015 at 7:44 PM, Allen Wittenauer  wrote:
>
> Today I discovered that an old, old code change (circa 2012) caused 
> certain maven modules to be skipped during the precommit testing.  That code 
> had been carried forward through all the rewrites, bug fixes, etc, over the 
> years likely because it seemed the correct thing to do. It is clearly not, 
> since at least hadoop-pipes was getting ignored.  I have not evaluated the 
> impact of the bug on other parts of the Hadoop code base.
>
> I’ve got sitting in the Hadoop beta test branch of Yetus what I think 
> is a potential fix to that bug.  Initial testing shows all systems are go.
>
> Additionally:
>
> * After several weeks, Yetus being used for Hadoop Common has been 
> as stable as or more stable than trunk's test-patch, while testing 
> significantly more parts.
> * People are still reporting bugs on trunk’s test-patch that have 
> been fixed for months in Yetus.
> * People are still confused as to which version is running where, 
> despite the email thread just a few days ago.  (*exasperated sigh here*)
>
> So I’m going to turn on Yetus for *ALL* Hadoop precommit jobs later 
> tonight. (Given how backed up Jenkins is at the moment, there is plenty of 
> time. haha) Anyway, if you see “Powered by Yetus” in the Hadoop QA posts, 
> you’ve got Yetus.  If you don’t see it, it ran on trunk’s test-patch.
>
> (… and now begin the threads on everyone freaking out and/or not 
> reading the above …)


Re: Github integration for Hadoop

2015-10-30 Thread Andrew Wang
>
>
> >>
> >> > If that's the case, we should also look at review alternatives
> >> > like RB and Crucible.
> >>
> >> Okay by me if the community consensus supports one of them. The fact
> that
> >> they exist but no one uses them is not a ringing endorsement.
> >>
> >> HBase uses reviewboard, as do I'm sure other Apache projects.
> >reviews.apache.org existed before we had github integration. I've used
> RB a
> >fair bit, and don't mind it.
>
> I could not get RB working with the Hadoop sub-projects. Would you be
> willing to try it out on a Hadoop/HDFS Jira since you have experience with
> it?
>

Here's an RB I made with a test patch; the trick is to use "git diff
--full-index":

https://reviews.apache.org/r/39820/
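
That is, generate the patch to upload with something like (branch and file
names illustrative):

    git diff --full-index trunk > HADOOP-XXXX.patch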

I'm not clear on whether there's mirroring between RB and JIRA; something to
investigate.


Re: Github integration for Hadoop

2015-10-30 Thread Allen Wittenauer

> * Have we tried our precommit on PRs yet? Does it work for multiple
> branches? Is there a way to enforce rebase+squash vs. merge on the PR,
> since, per Allen, Yetus requires one commit to work?


I don’t know about the Jenkins-side of things (e.g., how does Jenkins 
trigger a build?).  As far as Yetus is concerned, here’s the functionality that 
has been written:

* Pull patches from Github by PR #
* Comment on patches in Github, given credentials
* Comment on specific lines in Github, given credentials
* Test patches against the branch/repo that the pull request is against
* GH<->JIRA intelligence such that if a PR mentions an issue as the 
leading text in the subject line or an issue mentions a PR in the comments, 
pull from github and put a comment in both places (again, given credentials)

Re: Github integration for Hadoop

2015-10-30 Thread Andrew Wang
>
>
> > 2) Better attribution of contributors with their GH id (Arpit)
> >
>
> This doesn't happen very naturally, other than that pull requests are
> typically shown in the contributor's fork of the apache repo.
>
It would happen if we used PRs for integration, but yes, I agree that it does
not if GH is just used as a review tool.

>
> > 3) The backlog of PRs against our unused GH project (Owen)
> >
>
> This happens too.
>
The backlog is from people who don't want to go through JIRA and just want
to fork and PR on github.

Enabling GH-as-code-review-tool is not going to help this community of
developers.

>
> >
> > Applying "use GH PRs as a review tool" to the above problems:
> >
> > 1) I think it has advantages, but as I said earlier, Github PRs really
> are
> > not designed for our rebase+squash workflow, since AFAIK it doesn't
> support
> > interdiffs.
> >
>
> Yes, it does. Pushing to the branch that the pull request was created from
> updates the pull request.
>

That's not interdiff support, which was my main point about differences in
workflow.

>
>
> > 3) not solved since all issues still need to start as a JIRA, or else the
> > mirroring won't work.
> >
>
> That isn't true. If you push an empty commit that tells the integration to
> close pull requests, they are closed. We could clean up the current pull
> requests now that the integration is there.
>
Cleaning them up is good, but as I said above it doesn't help this
community of users who filed these in the first place. That is, users who
want to use GitHub but don't like JIRA.

I also searched "disable pull requests" and found this, which could help
with this issue:

https://nopullrequests.appspot.com/

>
> >
> > I'd also ask the following unknown questions:
> >
> > * Is there a way to *disable* using PRs for integration? i.e. disable the
> > ability to merge PRs?
> >
>
> That is already disabled.
>
Great, glad to hear it. That wasn't mentioned in the email thread or on
the INFRA ticket, and the INFRA ticket mentions these integrations:

Commit Comment, Create, Issue Comment, Issues, Pull Request, Pull Request
Comment, and Push

Are these the right set of permissions to disable integrating PRs? Namely,
the "push" permissions look unnecessary. We should also disable GH issues
since we don't want users filing issues there.

>
> > * Have we tried our precommit on PRs yet? Does it work for multiple
> > branches? Is there a way to enforce rebase+squash vs. merge on the PR,
> > since, per Allen, Yetus requires one commit to work?
> >
>
> Allen said that it was either working or easy to make work.
>
Great to hear; this was not mentioned on this email thread besides Allen
saying multi-commit PRs are problematic.

>
> >
> > This thread is barely 24 hours old, and I don't see why we're trying to
> > move this fast on the issue. Let's discuss some alternatives (!) and
> settle
> > on the right solution. We also haven't even broached review alternatives
> > like RB, Crucible, etc which were in the running last time this topic
> came
> > up.
> >
>
> I'm sorry that I moved too fast. There are clearly a lot of people who want
> to use it as a review tool and getting github integration enabled is easy.
> Furthermore, it isn't exclusive. There will still be people uploading
> patches on jira. This is about giving the contributors and committers the
> ability to use a popular review tool.
>
I'm onboard with improved code review tools. It was not clear this
proposal was for github *only* as a code review tool until about an hour
ago. It also wasn't clear to a lot of the people who were +1ing based on
the long-form comments.

My one sticking point is that GH remains only a code review tool until a
later discussion. So, no merging of PRs directly, and JIRAs come before
PRs. If people want to revisit this, let's discuss in a thread that does
not begin and end in 24 hours, and do a proper proposal and vote so there
aren't ambiguities or open questions before things happen.

Best,
Andrew