[jira] [Created] (HADOOP-14832) listing s3a bucket without credentials gives Interrupted error

2017-09-04 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14832:
---

 Summary: listing s3a bucket without credentials gives Interrupted error
 Key: HADOOP-14832
 URL: https://issues.apache.org/jira/browse/HADOOP-14832
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 3.0.0-beta1
Reporter: John Zhuge
Priority: Minor


In trunk pseudo-distributed mode, without setting s3a credentials, listing an
s3a bucket gives only an "Interrupted" error:
{noformat}
$ hadoop fs -ls s3a://bucket/
ls: Interrupted
{noformat}

In comparison, branch-2 gives a much better error message:
{noformat}
(branch-2)$ hadoop_env hadoop fs -ls s3a://bucket/
ls: doesBucketExist on hdfs-cce: com.amazonaws.AmazonClientException: No AWS 
Credentials provided by BasicAWSCredentialsProvider 
EnvironmentVariableCredentialsProvider InstanceProfileCredentialsProvider : 
com.amazonaws.SdkClientException: Unable to load credentials from service 
endpoint
{noformat}
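
The fix is presumably to unwrap and rethrow the underlying failure instead of
reporting the generic "Interrupted". A minimal sketch of that translation,
assuming the bucket existence probe runs in a {{Future}}; {{BucketProbe}},
{{awaitBucketCheck}} and {{bucketExistsFuture}} are illustrative names, not the
actual S3AFileSystem code:

{code:java}
import java.io.IOException;
import java.io.InterruptedIOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

// Hypothetical helper; all names here are illustrative.
final class BucketProbe {
  static void awaitBucketCheck(Future<Boolean> bucketExistsFuture, String bucket)
      throws IOException {
    try {
      if (!bucketExistsFuture.get()) {
        throw new IOException("Bucket " + bucket + " does not exist");
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      InterruptedIOException ioe =
          new InterruptedIOException("Interrupted while verifying bucket " + bucket);
      ioe.initCause(e);
      throw ioe;
    } catch (ExecutionException e) {
      // Surface the underlying AWS SDK failure (e.g. missing credentials)
      // instead of losing it behind a generic "Interrupted" message.
      Throwable cause = e.getCause();
      if (cause instanceof IOException) {
        throw (IOException) cause;
      }
      throw new IOException("doesBucketExist on " + bucket + ": " + cause, cause);
    }
  }
}
{code}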




DISCUSS: Hadoop Compatibility Guidelines

2017-09-04 Thread Daniel Templeton
All, in prep for Hadoop 3 beta 1 I've been working on updating the 
compatibility guidelines on HADOOP-13714.  I think the initial doc is 
more or less complete, so I'd like to open the discussion up to the 
broader Hadoop community.


In the new guidelines, I have drawn some lines in the sand regarding 
compatibility between releases.  In some cases these lines are more 
restrictive than current practice.  The intent of the new guidelines is 
not to limit progress by restricting what goes into a release, but rather 
to keep release numbering in line with the reality of the code.


Please have a read and provide feedback on the JIRA.  I'm sure there are 
more than a couple of areas that could be improved.  If you'd rather not 
read markdown from a diff patch, I've attached PDFs of the two modified 
docs.


Thanks!
Daniel



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-09-04 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/513/

[Sep 4, 2017 2:50:41 AM] (xiao) HDFS-12383. Re-encryption updater should handle 
canceled tasks better.




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager

   Hard coded reference to an absolute pathname in org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext) At DockerLinuxContainerRuntime.java:[line 490]

Failed junit tests :

   hadoop.ha.TestZKFailoverController 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 
   hadoop.hdfs.TestLeaseRecoveryStriped 
   hadoop.hdfs.TestClientProtocolForPipelineRecovery 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 
   hadoop.hdfs.TestFileAppendRestart 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 
   hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 
   hadoop.hdfs.TestReadStripedFileWithMissingBlocks 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 
   hadoop.hdfs.TestWriteConfigurationToDFS 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 
   hadoop.tracing.TestTracing 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 
   hadoop.hdfs.server.namenode.ha.TestHAAppend 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 
   hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation 
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.cli.TestLogsCLI 
   hadoop.mapreduce.v2.hs.webapp.TestHSWebApp 
   hadoop.yarn.sls.TestReservationSystemInvariants 
   hadoop.yarn.sls.TestSLSRunner 

Timed out junit tests :

   org.apache.hadoop.hdfs.TestWriteReadStripedFile 
   org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding 
  

   cc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/513/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/513/artifact/out/diff-compile-javac-root.txt [292K]

   checkstyle:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/513/artifact/out/diff-checkstyle-root.txt [17M]

   pylint:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/513/artifact/out/diff-patch-pylint.txt [20K]

   shellcheck:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/513/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/513/artifact/out/diff-patch-shelldocs.txt [12K]

   whitespace:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/513/artifact/out/whitespace-eol.txt [11M]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/513/artifact/out/whitespace-tabs.txt [1.2M]

   findbugs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/513/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html [8.0K]

   javadoc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/513/artifact/out/patch-javadoc-root.txt [2.0M]

   unit:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/513/artifact/out/patch-unit-hadoop-common-project_had

[jira] [Created] (HADOOP-14831) Über-jira: S3a phase IV: Hadoop 3.1 features

2017-09-04 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14831:
---

 Summary: Über-jira: S3a phase IV: Hadoop 3.1 features
 Key: HADOOP-14831
 URL: https://issues.apache.org/jira/browse/HADOOP-14831
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.1.0
Reporter: Steve Loughran


All the S3/S3A features targeting Hadoop 3.1.



[jira] [Resolved] (HADOOP-13654) S3A create() to support asynchronous check of dest & parent paths

2017-09-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13654.
-
Resolution: Won't Fix

> S3A create() to support asynchronous check of dest & parent paths
> -
>
> Key: HADOOP-13654
> URL: https://issues.apache.org/jira/browse/HADOOP-13654
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>
> One source of delays in S3A is the need to check whether a destination path 
> exists in create(); this makes sure the operation isn't trying to overwrite a 
> directory.
> # This is slow: 1-4 HTTPS requests.
> # The code doesn't seem to check the entire parent path to make sure there 
> isn't a file as a parent (which raises the question: shouldn't we have a 
> contract test for this?)
> # Even with the create overwrite=false check, the fact that the new object 
> isn't created until the output stream is close()'d means that the check has 
> race conditions.
> Instead of doing a synchronous check in create(), we could do an asynchronous 
> check of the parent directory tree. If any error surfaced, it could be 
> cached and then thrown on the next call to write(), flush() or close(); that 
> is, the failure of a create due to path problems would not surface 
> immediately on the create() call, *but before any writes were committed*.
> The full directory tree can/should be checked, and its results remembered. 
> This would allow the post-commit cleanup to issue delete() requests 
> purely for those paths (if any) which referred to directories.
> As well as the need to use the AWS thread pool, there's a bit of complexity 
> with cancelling multipart uploads: the output stream needs to know that the 
> request failed, and that the multipart upload should be aborted.
> If the complexity of the asynchronous calls can be coped with, *and client 
> code is happy to accept errors in any IO call to the output stream*, then 
> the initial overhead at file creation could be skipped.
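
The deferred-failure scheme described above can be sketched briefly. This is a
hypothetical illustration of the proposal, not actual S3A code: the path
validation runs on a thread pool at create() time, any failure is cached, and
it is rethrown on the next write(), flush() or close(), i.e. before any data
is committed. {{DeferredCheckOutputStream}} and {{pathCheck}} are illustrative
names:

{code:java}
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical wrapper stream; all names here are illustrative.
final class DeferredCheckOutputStream extends OutputStream {
  private final OutputStream inner;
  // First failure observed by the async path check, if any.
  private final AtomicReference<IOException> pendingFailure = new AtomicReference<>();

  DeferredCheckOutputStream(OutputStream inner, ExecutorService pool, Runnable pathCheck) {
    this.inner = inner;
    // Kick off destination/parent-path validation in the background.
    CompletableFuture.runAsync(pathCheck, pool).whenComplete((ignored, failure) -> {
      if (failure != null) {
        pendingFailure.compareAndSet(null,
            new IOException("path validation failed", failure));
      }
    });
  }

  // Throw the cached validation failure, surfacing it before data is committed.
  private void checkFailure() throws IOException {
    IOException e = pendingFailure.get();
    if (e != null) {
      throw e;
    }
  }

  @Override
  public void write(int b) throws IOException {
    checkFailure();
    inner.write(b);
  }

  @Override
  public void flush() throws IOException {
    checkFailure();
    inner.flush();
  }

  @Override
  public void close() throws IOException {
    // In the real proposal this is also where a pending multipart
    // upload would be aborted on failure.
    checkFailure();
    inner.close();
  }
}
{code}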


