Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-09-01 Thread Arun Suresh
+1 (binding).

Thanks for driving this, Andrew.

* Downloaded and built from source.
* Set up a 5-node cluster.
* Verified that MR works with opportunistic containers.
* Verified that the AMRMClient supports 'allocationRequestId'.

Cheers
-Arun

On Thu, Sep 1, 2016 at 4:31 PM, Aaron Fabbri  wrote:

> +1, non-binding.
>
> I built everything on OS X and ran the s3a contract tests successfully:
>
> mvn test -Dtest=org.apache.hadoop.fs.contract.s3a.\*
>
> ...
>
> Results :
>
>
> Tests run: 78, Failures: 0, Errors: 0, Skipped: 1
>
>
> [INFO] ------------------------------------------------------------------------
>
> [INFO] BUILD SUCCESS
>
> [INFO] ------------------------------------------------------------------------
>
>
> On Thu, Sep 1, 2016 at 3:39 PM, Andrew Wang  wrote:
>
> > Good point Allen, I forgot about `hadoop version`. Since it's populated by
> > a version-info.properties file, people can always cat that file.
> >
> > On Thu, Sep 1, 2016 at 3:21 PM, Allen Wittenauer <a...@effectivemachines.com> wrote:
> >
> > >
> > > > On Sep 1, 2016, at 3:18 PM, Allen Wittenauer <a...@effectivemachines.com> wrote:
> > > >
> > > >
> > > >> On Sep 1, 2016, at 2:57 PM, Andrew Wang  wrote:
> > > >>
> > > >> Steve requested a git hash for this release. This led us into a brief
> > > >> discussion of our use of git tags, wherein we realized that although
> > > >> release tags are immutable (start with "rel/"), RC tags are not. This is
> > > >> based on the HowToRelease instructions.
> > > >
> > > >   We should probably embed the git hash in one of the files that
> > > > gets gpg signed.  That's an easy change to create-release.
> > >
> > >
> > > (Well, one more easily accessible than 'hadoop version')
> >
>


Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-09-01 Thread Aaron Fabbri
+1, non-binding.

I built everything on OS X and ran the s3a contract tests successfully:

mvn test -Dtest=org.apache.hadoop.fs.contract.s3a.\*

...

Results :


Tests run: 78, Failures: 0, Errors: 0, Skipped: 1


[INFO] ------------------------------------------------------------------------

[INFO] BUILD SUCCESS

[INFO] ------------------------------------------------------------------------

On Thu, Sep 1, 2016 at 3:39 PM, Andrew Wang  wrote:

> Good point Allen, I forgot about `hadoop version`. Since it's populated by
> a version-info.properties file, people can always cat that file.
>
> On Thu, Sep 1, 2016 at 3:21 PM, Allen Wittenauer <a...@effectivemachines.com> wrote:
>
> >
> > > On Sep 1, 2016, at 3:18 PM, Allen Wittenauer <a...@effectivemachines.com> wrote:
> > >
> > >
> > >> On Sep 1, 2016, at 2:57 PM, Andrew Wang  wrote:
> > >>
> > >> Steve requested a git hash for this release. This led us into a brief
> > >> discussion of our use of git tags, wherein we realized that although
> > >> release tags are immutable (start with "rel/"), RC tags are not. This is
> > >> based on the HowToRelease instructions.
> > >
> > >   We should probably embed the git hash in one of the files that
> > > gets gpg signed.  That's an easy change to create-release.
> >
> >
> > (Well, one more easily accessible than 'hadoop version')
>


Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-09-01 Thread Andrew Wang
Good point Allen, I forgot about `hadoop version`. Since it's populated by
a version-info.properties file, people can always cat that file.
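
For example (a sketch; the jar path below is an assumption for this release
layout, and common-version-info.properties is the resource VersionInfo reads):

unzip -p share/hadoop/common/hadoop-common-3.0.0-alpha1.jar common-version-info.properties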

On Thu, Sep 1, 2016 at 3:21 PM, Allen Wittenauer <a...@effectivemachines.com> wrote:

>
> > On Sep 1, 2016, at 3:18 PM, Allen Wittenauer <a...@effectivemachines.com> wrote:
> >
> >
> >> On Sep 1, 2016, at 2:57 PM, Andrew Wang  wrote:
> >>
> >> Steve requested a git hash for this release. This led us into a brief
> >> discussion of our use of git tags, wherein we realized that although
> >> release tags are immutable (start with "rel/"), RC tags are not. This is
> >> based on the HowToRelease instructions.
> >
> >   We should probably embed the git hash in one of the files that
> > gets gpg signed.  That's an easy change to create-release.
>
>
> (Well, one more easily accessible than 'hadoop version')


Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-09-01 Thread Allen Wittenauer

> On Sep 1, 2016, at 3:18 PM, Allen Wittenauer <a...@effectivemachines.com> wrote:
> 
> 
>> On Sep 1, 2016, at 2:57 PM, Andrew Wang  wrote:
>> 
>> Steve requested a git hash for this release. This led us into a brief
>> discussion of our use of git tags, wherein we realized that although
>> release tags are immutable (start with "rel/"), RC tags are not. This is
>> based on the HowToRelease instructions.
> 
>   We should probably embed the git hash in one of the files that gets gpg 
> signed.  That's an easy change to create-release.


(Well, one more easily accessible than 'hadoop version')



Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-09-01 Thread Allen Wittenauer

> On Sep 1, 2016, at 2:57 PM, Andrew Wang  wrote:
> 
> Steve requested a git hash for this release. This led us into a brief
> discussion of our use of git tags, wherein we realized that although
> release tags are immutable (start with "rel/"), RC tags are not. This is
> based on the HowToRelease instructions.

We should probably embed the git hash in one of the files that gets gpg 
signed.  That's an easy change to create-release.
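
A sketch of the idea (the file name here is hypothetical; create-release
would pick the real artifact to sign):

git rev-parse HEAD > RELEASE.githash && gpg --armor --detach-sign RELEASE.githash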







Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-09-01 Thread Chen He
+1 non-binding

downloaded source and built successfully;
deployed to a single-node cluster;
ran wordcount and loadgen successfully;
verified that MR-6336 is there and FileOutputCommitter is using algorithm 2
by default.
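One way to confirm that default (a sketch; the jar version in the path is an
assumption for this RC):

unzip -p share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.0.0-alpha1.jar mapred-default.xml | grep -A1 fileoutputcommitter.algorithm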

Best Regards!

Chen


On Wed, Aug 31, 2016 at 9:25 PM, Rakesh Radhakrishnan  wrote:

> Thanks for getting this out.
>
> +1 (non-binding)
>
> - downloaded and built tarball from source
> - deployed an HDFS-HA cluster and tested a few EC file operations
> - executed a few hdfs commands, including EC commands
> - viewed basic UI
> - ran some of the sample jobs
>
>
> Best Regards,
> Rakesh
> Intel
>
> On Thu, Sep 1, 2016 at 6:19 AM, John Zhuge  wrote:
>
> > +1 (non-binding)
> >
> > - Build source with Java 1.8.0_101 on Centos 6.6 without native
> > - Verify license and notice using the shell script in HADOOP-13374
> > - Deploy a pseudo cluster
> > - Run basic dfs, distcp, ACL, webhdfs commands
> > - Run MapReduce workcount and pi examples
> > - Run balancer
> >
> > Thanks,
> > John
> >
> > John Zhuge
> > Software Engineer, Cloudera
> >
> > On Wed, Aug 31, 2016 at 11:46 AM, Gangumalla, Uma <uma.ganguma...@intel.com> wrote:
> >
> > > +1 (binding).
> > >
> > > Overall it's a great effort, Andrew. Thank you for putting in all the
> > > energy.
> > >
> > > Downloaded and built.
> > > Ran some sample jobs.
> > >
> > > I would love to see all these efforts lead to a GA release of Hadoop
> > > 3.x soon.
> > >
> > > Regards,
> > > Uma
> > >
> > >
> > > On 8/30/16, 8:51 AM, "Andrew Wang"  wrote:
> > >
> > > >Hi all,
> > > >
> > > >Thanks to the combined work of many, many contributors, here's an RC0 for
> > > >3.0.0-alpha1:
> > > >
> > > >http://home.apache.org/~wang/3.0.0-alpha1-RC0/
> > > >
> > > >alpha1 is the first in a series of planned alpha releases leading up to
> > > >GA. The objective is to get an artifact out to downstreams for testing and
> > > >to iterate quickly based on their feedback. So, please keep that in mind
> > > >when voting; hopefully most issues can be addressed by future alphas rather
> > > >than future RCs.
> > > >
> > > >Sorry for getting this out on a Tuesday, but I'd still like this vote to
> > > >run the normal 5 days, thus ending Saturday (9/3) at 9AM PDT. I'll extend
> > > >if we lack the votes.
> > > >
> > > >Please try it out and let me know what you think.
> > > >
> > > >Best,
> > > >Andrew


Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-09-01 Thread Andrew Wang
Steve requested a git hash for this release. This led us into a brief
discussion of our use of git tags, wherein we realized that although
release tags are immutable (start with "rel/"), RC tags are not. This is
based on the HowToRelease instructions.

I asked in infra.chat about this, and filed a JIRA per their request. I'll
update the tags and release instructions once we have guidance.

https://issues.apache.org/jira/browse/INFRA-12552

For now though, here's the tag I pushed (email is immutable):

object a990d2ebcd6de5d7dc2d3684930759b0f0ea4dc3
type commit
tag release-3.0.0-alpha1-RC0
tagger Andrew Wang  1472541776 -0700

Release candidate - 3.0.0-alpha1-RC0
gpg: Signature made Tue 30 Aug 2016 12:22:56 AM PDT using RSA key ID 7501105C
gpg: Good signature from "Andrew Wang (CODE SIGNING KEY) <andrew.w...@cloudera.com>"
gpg:                 aka "Andrew Wang (CODE SIGNING KEY) "
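
(For anyone reproducing this check: with the release KEYS imported, the
output above should match what `git tag -v release-3.0.0-alpha1-RC0` prints.)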


On Tue, Aug 30, 2016 at 8:51 AM, Andrew Wang  wrote:

> Hi all,
>
> Thanks to the combined work of many, many contributors, here's an RC0 for
> 3.0.0-alpha1:
>
> http://home.apache.org/~wang/3.0.0-alpha1-RC0/
>
> alpha1 is the first in a series of planned alpha releases leading up to
> GA. The objective is to get an artifact out to downstreams for testing and
> to iterate quickly based on their feedback. So, please keep that in mind
> when voting; hopefully most issues can be addressed by future alphas rather
> than future RCs.
>
> Sorry for getting this out on a Tuesday, but I'd still like this vote to
> run the normal 5 days, thus ending Saturday (9/3) at 9AM PDT. I'll extend
> if we lack the votes.
>
> Please try it out and let me know what you think.
>
> Best,
> Andrew
>


[jira] [Created] (HADOOP-13573) S3Guard: create basic contract tests for MetadataStore implementations

2016-09-01 Thread Aaron Fabbri (JIRA)
Aaron Fabbri created HADOOP-13573:
-------------------------------------

 Summary: S3Guard: create basic contract tests for MetadataStore implementations
 Key: HADOOP-13573
 URL: https://issues.apache.org/jira/browse/HADOOP-13573
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.9.0
Reporter: Aaron Fabbri
Assignee: Aaron Fabbri


We should have some contract-style unit tests for the MetadataStore interface 
to validate that the different implementations provide correct semantics.
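
A sketch of the shape these tests could take (hedged: the MetadataStore API is 
still being defined, so the interface and types below are assumptions for 
illustration only):

{noformat}
import static org.junit.Assert.assertNotNull;

import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

/** Assumed minimal surface for this sketch; the real interface is TBD. */
interface MetadataStore {
  void put(PathMetadata meta) throws IOException;
  PathMetadata get(Path path) throws IOException;
}

/** Assumed value type holding metadata for one path. */
final class PathMetadata {
  private final Path path;
  PathMetadata(Path path) { this.path = path; }
  Path getPath() { return path; }
}

/** Contract-style base test; each implementation supplies its own store. */
public abstract class AbstractMetadataStoreContractTest {

  /** Implementations override this to return the store under test. */
  protected abstract MetadataStore createMetadataStore() throws IOException;

  @Test
  public void testPutThenGet() throws IOException {
    MetadataStore ms = createMetadataStore();
    Path path = new Path("/a1/b1");
    ms.put(new PathMetadata(path));
    assertNotNull("entry should be visible after put", ms.get(path));
  }
}
{noformat}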






[jira] [Resolved] (HADOOP-13570) Hadoop Swift driver should use new Apache httpclient

2016-09-01 Thread Chen He (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-13570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chen He resolved HADOOP-13570.
------------------------------
Resolution: Duplicate

Duplicate of HADOOP-11614; closing this one.

> Hadoop Swift driver should use new Apache httpclient
> -----------------------------------------------------
>
> Key: HADOOP-13570
> URL: https://issues.apache.org/jira/browse/HADOOP-13570
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/swift
>Affects Versions: 2.7.3, 2.6.4
>Reporter: Chen He
>
> The current Hadoop OpenStack module is still using Apache HttpClient v1.x, 
> which is too old. We need to update it to a newer version to catch up in 
> performance.






[jira] [Resolved] (HADOOP-13572) fs.s3native.mkdirs does not work if the user is only authorized to a subdirectory

2016-09-01 Thread Steve Loughran (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-13572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-13572.
-------------------------------------
Resolution: Duplicate

> fs.s3native.mkdirs does not work if the user is only authorized to a subdirectory
> ----------------------------------------------------------------------------------
>
> Key: HADOOP-13572
> URL: https://issues.apache.org/jira/browse/HADOOP-13572
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Marcin Zukowski
>Priority: Minor
>
> I noticed this when working with Spark. I have an S3 bucket whose top-level 
> directories have protected access, and a dedicated open directory deeper in 
> the tree for Spark temporary data.
> Writing to this directory fails with the following stack trace:
> {noformat}
> [info]   org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 HEAD request failed for 
> '/SPARK-SNOWFLAKEDB' - ResponseCode=403, ResponseMessage=Forbidden
> [info]   at 
> org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleServiceException(Jets3tNativeFileSystemStore.java:245)
> [info]   at 
> org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:119)
> [info]   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [info]   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> [info]   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [info]   at java.lang.reflect.Method.invoke(Method.java:497)
> [info]   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> [info]   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> [info]   at org.apache.hadoop.fs.s3native.$Proxy34.retrieveMetadata(Unknown 
> Source)
> [info]   at 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:414)
> [info]   at 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem.mkdir(NativeS3FileSystem.java:539)
> [info]   at 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem.mkdirs(NativeS3FileSystem.java:532)
> [info]   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1933)
> [info]   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.setupJob(FileOutputCommitter.java:291)
> [info]   at 
> org.apache.hadoop.mapred.FileOutputCommitter.setupJob(FileOutputCommitter.java:131)
> {noformat}
> I believe this is because mkdirs in NativeS3FileSystem.java tries to create 
> directories starting "from the root", and so if the process can't "list" 
> objects on a given level, it fails. Perhaps it should accept this kind of 
> failure, or go "from the leaf" first to find the level from which it needs 
> to start creating directories. That might also be good for performance, 
> assuming the directories exist most of the time.






[jira] [Created] (HADOOP-13572) fs.s3native.mkdirs does not work if the user is only authorized to a subdirectory

2016-09-01 Thread Marcin Zukowski (JIRA)
Marcin Zukowski created HADOOP-13572:
-------------------------------------

 Summary: fs.s3native.mkdirs does not work if the user is only authorized to a subdirectory
 Key: HADOOP-13572
 URL: https://issues.apache.org/jira/browse/HADOOP-13572
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Marcin Zukowski


I noticed this when working with Spark. I have an S3 bucket whose top-level 
directories have protected access, and a dedicated open directory deeper in 
the tree for Spark temporary data.

Writing to this directory fails with the following stack trace:
{noformat}
[info]   org.apache.hadoop.fs.s3.S3Exception: 
org.jets3t.service.S3ServiceException: S3 HEAD request failed for 
'/SPARK-SNOWFLAKEDB' - ResponseCode=403, ResponseMessage=Forbidden
[info]   at 
org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleServiceException(Jets3tNativeFileSystemStore.java:245)
[info]   at 
org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:119)
[info]   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[info]   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[info]   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[info]   at java.lang.reflect.Method.invoke(Method.java:497)
[info]   at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
[info]   at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
[info]   at org.apache.hadoop.fs.s3native.$Proxy34.retrieveMetadata(Unknown 
Source)
[info]   at 
org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:414)
[info]   at 
org.apache.hadoop.fs.s3native.NativeS3FileSystem.mkdir(NativeS3FileSystem.java:539)
[info]   at 
org.apache.hadoop.fs.s3native.NativeS3FileSystem.mkdirs(NativeS3FileSystem.java:532)
[info]   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1933)
[info]   at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.setupJob(FileOutputCommitter.java:291)
[info]   at 
org.apache.hadoop.mapred.FileOutputCommitter.setupJob(FileOutputCommitter.java:131)
{noformat}

I believe this is because mkdirs in NativeS3FileSystem.java tries to create 
directories starting "from the root", and so if the process can't "list" 
objects on a given level, it fails. Perhaps it should accept this kind of 
failure, or go "from the leaf" first to find the level from which it needs to 
start creating directories. That might also be good for performance, assuming 
the directories exist most of the time. A sketch of the leaf-first idea is 
below.
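
A hedged sketch of that leaf-first walk (illustration only, not the actual 
NativeS3FileSystem change):

{noformat}
import java.io.IOException;
import java.util.ArrayDeque;
import java.util.Deque;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class LeafFirstMkdirs {
  /** Create 'dir', probing upward from the leaf instead of from the root. */
  static boolean mkdirsLeafFirst(FileSystem fs, Path dir) throws IOException {
    Deque<Path> missing = new ArrayDeque<>();
    for (Path p = dir; p != null; p = p.getParent()) {
      try {
        if (fs.exists(p)) {
          break;                      // found an existing ancestor; stop climbing
        }
      } catch (IOException accessDenied) {
        break;                        // can't probe this level (e.g. 403); don't climb past it
      }
      missing.push(p);                // remember this missing level
    }
    boolean ok = true;
    while (!missing.isEmpty()) {
      ok &= fs.mkdirs(missing.pop()); // create downward, target directory last
    }
    return ok;
  }
}
{noformat}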






[jira] [Resolved] (HADOOP-13569) S3AFastOutputStream to take ProgressListener in file create()

2016-09-01 Thread Steve Loughran (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-13569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-13569.
-------------------------------------
    Resolution: Duplicate
Fix Version/s: 2.9.0

> S3AFastOutputStream to take ProgressListener in file create()
> --------------------------------------------------------------
>
> Key: HADOOP-13569
> URL: https://issues.apache.org/jira/browse/HADOOP-13569
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.9.0
>
>
> For scale testing I'd like more meaningful progress than the Hadoop 
> {{Progressable}} offers. 
> Proposed: have {{S3AFastOutputStream}} check whether the progressable 
> passed in is also an instance of {{com.amazonaws.event.ProgressListener}}, 
> and if so, wire it up directly.
> This would allow tests to directly track the state of the upload, log it, 
> and perhaps even assert on it.
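
A hedged sketch of that check (illustration only; the actual S3A wiring 
would differ):

{noformat}
import com.amazonaws.event.ProgressListener;
import org.apache.hadoop.util.Progressable;

final class ProgressWiring {
  /**
   * Return the caller's Progressable as an AWS ProgressListener when it
   * implements both interfaces, else null (fall back to Progressable-only).
   */
  static ProgressListener asAwsListener(Progressable progress) {
    return (progress instanceof ProgressListener)
        ? (ProgressListener) progress
        : null;
  }
}
{noformat}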






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-09-01 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/151/

[Aug 31, 2016 7:02:37 PM] (kihwal) HDFS-10729. Improve log message for edit 
loading failures caused by FS
[Aug 31, 2016 9:29:37 PM] (wang) HDFS-10784. Implement 
WebHdfsFileSystem#listStatusIterator.
[Aug 31, 2016 10:40:09 PM] (zhz) HDFS-10817. Add Logging for Long-held NN Read 
Locks. Contributed by Erik
[Sep 1, 2016 6:43:59 AM] (zhz) Addendum fix for HDFS-10817 to fix failure of 
the added




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 

Timed out junit tests :

   org.apache.hadoop.http.TestHttpServerLifecycle 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/151/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/151/artifact/out/diff-compile-javac-root.txt
  [168K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/151/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/151/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/151/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/151/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/151/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/151/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/151/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   CTEST:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/151/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
  [24K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/151/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [120K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/151/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [144K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/151/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/151/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/151/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/151/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [120K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/151/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org




[jira] [Created] (HADOOP-13571) ServerSocketUtil.getPort() should use loopback address, not 0.0.0.0

2016-09-01 Thread Eric Badger (JIRA)
Eric Badger created HADOOP-13571:
-------------------------------------

 Summary: ServerSocketUtil.getPort() should use loopback address, not 0.0.0.0
 Key: HADOOP-13571
 URL: https://issues.apache.org/jira/browse/HADOOP-13571
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Eric Badger


Using 0.0.0.0 to check for a free port will succeed even if there's something 
bound to that same port on the loopback interface. Since this function is used 
primarily in testing, it should check the loopback interface for free ports.
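
A hedged sketch of the loopback-based check described above (illustration 
only, not the actual ServerSocketUtil patch):

{noformat}
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

final class LoopbackPortCheck {
  /** True if 'port' can currently be bound on the loopback interface. */
  static boolean isPortFreeOnLoopback(int port) {
    try (ServerSocket ss =
             new ServerSocket(port, 50, InetAddress.getLoopbackAddress())) {
      return true;   // bind succeeded, so nothing holds the port on 127.0.0.1
    } catch (IOException e) {
      return false;  // something is already bound on loopback
    }
  }
}
{noformat}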


