Build failed in Jenkins: Hadoop-common-trunk-Java8 #292

2015-08-19 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-common-trunk-Java8/292/changes

Changes:

[aajisaka] HDFS-8852. HDFS architecture documentation of version 2.x is 
outdated about append write support. Contributed by Ajith S.

[zxu] YARN-3857: Memory leak in ResourceManager with SIMPLE mode. Contributed 
by mujunchao.

[zxu] YARN-4057. If ContainersMonitor is not enabled, only print related log 
info one time. Contributed by Jun Gong.

[benoy] HADOOP-12050. Enable MaxInactiveInterval for hadoop http auth token. 
Contributed by Huizhi Lu.

[cdouglas] HDFS-8435. Support CreateFlag in WebHDFS. Contributed by Jakob Homan

[szetszwo] HDFS-8826. In Balancer, add an option to specify the source node 
list so that balancer only selects blocks to move from those nodes.

[xgong] YARN-4028. AppBlock page key update and diagnostics value null on

--
[...truncated 3922 lines...]
[INFO] preparing 'analyze-report' report requires 'test-compile' forked phase 
execution
[INFO] 
[INFO] >>> maven-dependency-plugin:2.8:analyze-report @ hadoop-auth >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-auth ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-auth ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
https://builds.apache.org/job/Hadoop-common-trunk-Java8/ws/hadoop-common-project/hadoop-auth/src/main/resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-auth ---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hadoop-auth ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
https://builds.apache.org/job/Hadoop-common-trunk-Java8/ws/hadoop-common-project/hadoop-auth/src/test/resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-auth ---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] <<< maven-dependency-plugin:2.8:analyze-report @ hadoop-auth <<<
[WARNING] No project URL defined - decoration links will not be relativized!
[INFO] Rendering site with org.apache.maven.skins:maven-stylus-skin:jar:1.2 
skin.
[INFO] Rendering 4 Doxia documents: 4 markdown
[INFO] Generating "Dependency Analysis" report --- 
maven-dependency-plugin:2.8:analyze-report
[INFO] 
[INFO] --- maven-project-info-reports-plugin:2.7:dependencies (default) @ 
hadoop-auth ---
[ERROR] Artifact: jdk.tools:jdk.tools:jar:1.8 has no file.
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-auth ---
[INFO] 
Loading source files for package 
org.apache.hadoop.security.authentication.server...
Loading source files for package 
org.apache.hadoop.security.authentication.client...
Loading source files for package 
org.apache.hadoop.security.authentication.util...
Loading source files for package org.apache.hadoop.util...
Constructing Javadoc information...
Standard Doclet version 1.8.0
Building tree for all the packages and classes...
Generating 
https://builds.apache.org/job/Hadoop-common-trunk-Java8/ws/hadoop-common-project/hadoop-auth/target/org/apache/hadoop/security/authentication/server/AltKerberosAuthenticationHandler.html...
Generating 
https://builds.apache.org/job/Hadoop-common-trunk-Java8/ws/hadoop-common-project/hadoop-auth/target/org/apache/hadoop/security/authentication/server/AuthenticationFilter.html...
Generating 
https://builds.apache.org/job/Hadoop-common-trunk-Java8/ws/hadoop-common-project/hadoop-auth/target/org/apache/hadoop/security/authentication/server/AuthenticationHandler.html...
Generating 
https://builds.apache.org/job/Hadoop-common-trunk-Java8/ws/hadoop-common-project/hadoop-auth/target/org/apache/hadoop/security/authentication/server/AuthenticationToken.html...
Generating 
https://builds.apache.org/job/Hadoop-common-trunk-Java8/ws/hadoop-common-project/hadoop-auth/target/org/apache/hadoop/security/authentication/server/JWTRedirectAuthenticationHandler.html...
Generating 
https://builds.apache.org/job/Hadoop-common-trunk-Java8/ws/hadoop-common-project/hadoop-auth/target/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.html...
Generating 
https://builds.apache.org/job/Hadoop-common-trunk-Java8/ws/hadoop-common-project/hadoop-auth/target/org/apache/hadoop/security/authentication/server/PseudoAuthenticationHandler.html...
Generating 
https://builds.apache.org/job/Hadoop-common-trunk-Java8/ws/hadoop-common-project/hadoop-auth/target/org/apache/hadoop/security/authentication/client/AuthenticatedURL.html...
Generating 
https://builds.apache.org/job/Hadoop-common-trunk-Java8/ws/hadoop-common-project/hadoop-auth/target/org/apache/hadoop/security/authentication/client/AuthenticatedURL.Token.html...
Generating 

Re: [DISCUSS] Proposal for allowing merge workflows on feature branches

2015-08-19 Thread Steve Loughran
LGTM. This all needs to go into the Hadoop docs somewhere too, if/when the vote 
completes.

 On 18 Aug 2015, at 14:57, Andrew Wang andrew.w...@cloudera.com wrote:
 
 Hi common-dev,
 
 Based on the prior [DISCUSS] thread, I've put together a new [VOTE]
 proposal which modifies the branch development practices codified by the
 [VOTE] when we switched from SVN to git [1]. This new proposal modifies the
 third and fourth points of the earlier [VOTE], copied here for your
 convenience:
 
 3. Force-push on feature-branches is allowed. Before pulling in a feature,
 the feature-branch should be rebased on latest trunk and the changes
 applied to trunk through git rebase --onto or git cherry-pick
 commit-range.
 
 4. Every time a feature branch is rebased on trunk, a tag that identifies
 the state before the rebase needs to be created (e.g.
 tag_feature_JIRA-2454_2014-08-07_rebase). These tags can be deleted once
 the feature is pulled into trunk and the tags are no longer useful.
 
 Said new proposal expands and modifies as follows:
 
 
 
 Feature branch development can use either a merge or rebase workflow, as
 decided by contributors working on the branch.
 
 When using a rebase workflow, the feature branch is periodically rebased on
 trunk via git rebase trunk and force pushed.
 
 Before performing a force-push, a tag should be created of the current
 feature branch HEAD to preserve history. The tag should identify the
 feature and date of most recent commit, e.g.
 tag_feature_HDFS-7285_2015-08-11. It can also be convenient to use a
 temporary branch to review rebase conflict resolution before force-pushing
 the main feature branch, e.g. HDFS-7285-rebase. Temporary branches should
 be deleted after they are force-pushed over the feature branch.
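 
 In command form, that rebase flow might look like this (a sketch; the
 branch and tag names are illustrative):
 
   # tag the current HEAD before rewriting history
   git tag tag_feature_HDFS-7285_2015-08-11 HDFS-7285
   # rebase on a temporary branch so the conflict resolution can be reviewed
   git checkout -b HDFS-7285-rebase HDFS-7285
   git rebase trunk
   # after review, force-push over the feature branch, push the tag,
   # and delete the temporary branch
   git push --force origin HDFS-7285-rebase:HDFS-7285
   git push origin tag_feature_HDFS-7285_2015-08-11
   git branch -D HDFS-7285-rebase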
 
 Developers are allowed to squash and reorder commits to make rebasing
 easier. Use this judiciously. When squashing, please maintain the original
 commit messages in the squashed commit message to preserve history.
 
 When using a merge workflow, changes are periodically integrated from trunk
 to the branch via git merge trunk.
 
 Merge conflict resolution can be reviewed by posting the diff of the merge
 commit.
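 
 For example (a sketch; the hash is a placeholder):
 
   git checkout HDFS-7285
   git merge trunk
   # the combined diff of a merge commit shows only hunks that differ
   # from both parents -- roughly, the conflict resolution itself
   git show --cc <merge-commit-sha>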
 
 For both rebase and merge workflows, integration of the branch into trunk
 should happen via git merge --no-ff. --no-ff is important since it
 generates a merge commit even if the branch applies cleanly on top of
 trunk. This clearly denotes the set of commits that were made on the
 branch, and makes it easier to revert the branch if necessary.
 
 git merge --no-ff is also the preferred way of integrating a feature
 branch to other branches, e.g. branch-2.
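 
 Concretely (a sketch; the branch name is illustrative):
 
   git checkout trunk
   # --no-ff creates a merge commit even when trunk could fast-forward,
   # keeping the branch's commits grouped as a unit
   git merge --no-ff HDFS-7285
   # which also makes reverting the whole feature a single command
   git revert -m 1 <merge-commit-sha>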
 
 
 
 LMK what you think about the above; when we finalize, I'll kick off a
 [VOTE]. Last time we did "Adoption of New Codebase" but this feels more
 like "Modifying bylaws", if it needs a [VOTE] at all. Bylaws is easier,
 since it's just a lazy majority of active PMC members rather than 2/3rds.
 
 If the eventual [VOTE] passes, I'll put it on the wiki somewhere for easier
 reference. I'll also expand said wiki page with discussion about merge vs.
 rebase based on the mailing list threads, since I think we've got some good
 information here.
 
 Best,
 Andrew
 
 [1]:
 http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201408.mbox/%3CCALwhT94Y64M9keY25Ry_QOLUSZQT29tJQ95twsoa8xXrcNTxpQ%40mail.gmail.com%3E



Re: IPv6 Feature branch

2015-08-19 Thread Colin McCabe
+1, would be great to see Hadoop get IPv6 support.

Colin

On Mon, Aug 17, 2015 at 5:04 PM, Elliott Clark ecl...@apache.org wrote:
 Nate (nkedel) and I have been working on IPv6 on Hadoop and HBase lately.
 We're getting somewhere, but there are a lot of different places that make
 assumptions about the network. That means there will be a good deal of
 follow-on patches as we find more and more places that need some TLC.

 Would a feature branch be good here so that we can move quickly without
 hurting stability until all of the issues are done?

 Thoughts? Comments?

 Thanks


Re: Doubts about review and validation process

2015-08-19 Thread Colin P. McCabe
Hi Augusto,

Sorry that we haven't gotten back to you in a while.  This isn't my
area of expertise, but hopefully someone will step forward to look at
the MR stuff.

It seems like maybe you need a design document for the preemption
stuff.  Each JIRA gives a small piece of the puzzle, but it's hard to
fit them together without some background.

best,
Colin

On Mon, Aug 10, 2015 at 6:32 AM, Augusto Souza augustorso...@gmail.com wrote:
 Hello,

 I have been working on continuing Carlo Curino and Chris Douglas
 contribution on https://issues.apache.org/jira/browse/MAPREDUCE-5269. After
 some iterations, we decided to split this patch into smaller ones to make
 it easier for reviewers to check and to pass Jenkins validations.

 The first two sub-products of this splitting are
 https://issues.apache.org/jira/browse/MAPREDUCE-6434 and
 https://issues.apache.org/jira/browse/MAPREDUCE-6444 (and I am planning
 another one to finish the work that depends on these two).

 I have some doubts related to the review process for each of these
 improvements:

 1) *MAPREDUCE-6434*: How can I have the patch accepted now that it has
 been approved by Jenkins and has passed every test?

 2) *MAPREDUCE-6444*: How can I resolve the checkstyle issues about the
 number of parameters in some methods I created (more than seven)? I noticed
 there are other methods in the Hadoop source code with more than 7
 parameters. Most of my changes were to old methods that already had more
 than 7 parameters; I added further parameters to them and was caught by the
 checkstyle validation.

 Thanks in advance!

 Best regards,
 Augusto Souza


Re: IPv6 Feature branch

2015-08-19 Thread Chris Douglas
+1

I don't suppose you could be conned into fixing multi-NIC and other
networking issues also? ;)

Do you have a list of contributors who plan to work on this feature? -C

On Mon, Aug 17, 2015 at 5:04 PM, Elliott Clark ecl...@apache.org wrote:
 Nate (nkedel) and I have been working on IPv6 on Hadoop and HBase lately.
 We're getting somewhere, but there are a lot of different places that make
 assumptions about the network. That means there will be a good deal of
 follow-on patches as we find more and more places that need some TLC.

 Would a feature branch be good here so that we can move quickly without
 hurting stability until all of the issues are done?

 Thoughts? Comments?

 Thanks


Re: IPv6 Feature branch

2015-08-19 Thread Steve Loughran

+1 for a branch; ideally not too long-lived. Dhruba still has commit rights; 
perhaps he could be persuaded to help with it.

If you look at the Hadoop code, it's not just network assumptions: it's fairly 
brittle to bad network setup, and not helpful when these situations arise. What 
could be good as part of this, or as a sideline, is to have some entry point 
where you can probe the network setup, failing fast (with an error code) if a 
condition isn't met (e.g. the JVM doesn't get an IPv6 address, or its address 
doesn't match the one you get when you look up the hostname).
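
Something like this shell sketch captures the probe idea, assuming a Linux 
host with iproute2; it's an illustration only, not an existing Hadoop entry 
point:

  #!/bin/sh
  # fail fast if the host has no global IPv6 address
  addr=$(ip -6 addr show scope global | awk '/inet6/ {sub("/.*","",$2); print $2; exit}')
  [ -n "$addr" ] || { echo "FAIL: no global IPv6 address" >&2; exit 1; }
  # fail fast if the hostname does not resolve back to that address
  getent ahostsv6 "$(hostname -f)" | grep -qF "$addr" \
    || { echo "FAIL: $(hostname -f) does not resolve to $addr" >&2; exit 2; }
  echo "OK: $addr"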

 On 17 Aug 2015, at 17:04, Elliott Clark ecl...@apache.org wrote:
 
 Nate (nkedel) and I have been working on IPv6 on Hadoop and HBase lately.
 We're getting somewhere, but there are a lot of different places that make
 assumptions about the network. That means there will be a good deal of
 follow-on patches as we find more and more places that need some TLC.
 
 Would a feature branch be good here so that we can move quickly without
 hurting stability until all of the issues are done?
 
 Thoughts? Comments?
 
 Thanks



[jira] [Created] (HADOOP-12343) Error message of Swift driver should be more clear when there is mal-format of hostname and service

2015-08-19 Thread Chen He (JIRA)
Chen He created HADOOP-12343:


 Summary: Error message of Swift driver should be more clear when 
there is mal-format of hostname and service
 Key: HADOOP-12343
 URL: https://issues.apache.org/jira/browse/HADOOP-12343
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/swift
Affects Versions: 2.7.1
Reporter: Chen He
Assignee: Chen He


The Swift driver reports "Invalid swift hostname 'null', hostname must in form 
container.service" when the container name does not follow RFC952. However, the 
container or service name is not actually 'null'; the error message should be clearer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Common-trunk #1593

2015-08-19 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/1593/changes

Changes:

[aajisaka] HDFS-8852. HDFS architecture documentation of version 2.x is 
outdated about append write support. Contributed by Ajith S.

[zxu] YARN-3857: Memory leak in ResourceManager with SIMPLE mode. Contributed 
by mujunchao.

[zxu] YARN-4057. If ContainersMonitor is not enabled, only print related log 
info one time. Contributed by Jun Gong.

[benoy] HADOOP-12050. Enable MaxInactiveInterval for hadoop http auth token. 
Contributed by Huizhi Lu.

[cdouglas] HDFS-8435. Support CreateFlag in WebHDFS. Contributed by Jakob Homan

[szetszwo] HDFS-8826. In Balancer, add an option to specify the source node 
list so that balancer only selects blocks to move from those nodes.

[xgong] YARN-4028. AppBlock page key update and diagnostics value null on

--
[...truncated 5437 lines...]
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.438 sec - in 
org.apache.hadoop.metrics2.impl.TestGraphiteMetrics
Running org.apache.hadoop.metrics2.impl.TestMetricsVisitor
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.424 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsVisitor
Running org.apache.hadoop.metrics2.source.TestJvmMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.574 sec - in 
org.apache.hadoop.metrics2.source.TestJvmMetrics
Running org.apache.hadoop.metrics2.sink.TestFileSink
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.517 sec - in 
org.apache.hadoop.metrics2.sink.TestFileSink
Running org.apache.hadoop.metrics2.sink.ganglia.TestGangliaSink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.288 sec - in 
org.apache.hadoop.metrics2.sink.ganglia.TestGangliaSink
Running org.apache.hadoop.metrics2.filter.TestPatternFilter
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.451 sec - in 
org.apache.hadoop.metrics2.filter.TestPatternFilter
Running org.apache.hadoop.log.TestLogLevel
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.852 sec - in 
org.apache.hadoop.log.TestLogLevel
Running org.apache.hadoop.log.TestLog4Json
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.43 sec - in 
org.apache.hadoop.log.TestLog4Json
Running org.apache.hadoop.jmx.TestJMXJsonServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.125 sec - in 
org.apache.hadoop.jmx.TestJMXJsonServlet
Running org.apache.hadoop.ipc.TestIPCServerResponder
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.496 sec - in 
org.apache.hadoop.ipc.TestIPCServerResponder
Running org.apache.hadoop.ipc.TestRPCWaitForProxy
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.288 sec - in 
org.apache.hadoop.ipc.TestRPCWaitForProxy
Running org.apache.hadoop.ipc.TestSocketFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.335 sec - in 
org.apache.hadoop.ipc.TestSocketFactory
Running org.apache.hadoop.ipc.TestCallQueueManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.548 sec - in 
org.apache.hadoop.ipc.TestCallQueueManager
Running org.apache.hadoop.ipc.TestIdentityProviders
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.645 sec - in 
org.apache.hadoop.ipc.TestIdentityProviders
Running org.apache.hadoop.ipc.TestWeightedRoundRobinMultiplexer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.692 sec - in 
org.apache.hadoop.ipc.TestWeightedRoundRobinMultiplexer
Running org.apache.hadoop.ipc.TestRPCCompatibility
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.152 sec - in 
org.apache.hadoop.ipc.TestRPCCompatibility
Running org.apache.hadoop.ipc.TestProtoBufRpc
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.289 sec - in 
org.apache.hadoop.ipc.TestProtoBufRpc
Running org.apache.hadoop.ipc.TestMultipleProtocolServer
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.092 sec - in 
org.apache.hadoop.ipc.TestMultipleProtocolServer
Running org.apache.hadoop.ipc.TestRPCCallBenchmark
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.067 sec - in 
org.apache.hadoop.ipc.TestRPCCallBenchmark
Running org.apache.hadoop.ipc.TestRetryCacheMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.453 sec - in 
org.apache.hadoop.ipc.TestRetryCacheMetrics
Running org.apache.hadoop.ipc.TestMiniRPCBenchmark
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.965 sec - in 
org.apache.hadoop.ipc.TestMiniRPCBenchmark
Running org.apache.hadoop.ipc.TestIPC
Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 89.542 sec - 
in org.apache.hadoop.ipc.TestIPC
Running org.apache.hadoop.ipc.TestDecayRpcScheduler
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.242 sec - in 
org.apache.hadoop.ipc.TestDecayRpcScheduler
Running org.apache.hadoop.ipc.TestFairCallQueue
Tests run: 22, Failures: 0, Errors: 0, 

[jira] [Created] (HADOOP-12344) validateSocketPathSecurity0 message could be better

2015-08-19 Thread Casey Brotherton (JIRA)
Casey Brotherton created HADOOP-12344:
-

 Summary: validateSocketPathSecurity0 message could be better
 Key: HADOOP-12344
 URL: https://issues.apache.org/jira/browse/HADOOP-12344
 Project: Hadoop Common
  Issue Type: Improvement
  Components: net
Reporter: Casey Brotherton
Assignee: Casey Brotherton
Priority: Trivial


When a socket path does not have the correct permissions, an error is thrown.

That error includes only the failing component of the path, not the entire 
path of the socket.

Printing the entire socket path would allow a direct check of the permissions 
along the whole path.

{code}
java.io.IOException: the path component: '/' is world-writable.  Its permissions are 0077.  Please fix this or select a different socket path.
	at org.apache.hadoop.net.unix.DomainSocket.validateSocketPathSecurity0(Native Method)
	at org.apache.hadoop.net.unix.DomainSocket.bindAndListen(DomainSocket.java:189)
	...
{code}

The error message could also provide the socket path:
{code}
java.io.IOException: the path component: '/' is world-writable.  Its permissions are 0077.  Please fix this or select a different socket path than '/var/run/hdfs-sockets/dn'
{code}
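
A quick way to check the whole chain by hand on Linux (a sketch, using the 
hypothetical socket path from the message above):

{code}
path=/var/run/hdfs-sockets/dn
# print mode, owner, and name for the path and each ancestor, mirroring
# the per-component walk the native validation performs
while :; do
  stat -c '%a %U %n' "$path"
  [ "$path" = "/" ] && break
  path=$(dirname "$path")
done
{code}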



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)