Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-12-21 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/629/

[Dec 20, 2017 4:55:46 PM] (weiy) HDFS-12932. Fix confusing LOG message for 
block replication. Contributed
[Dec 20, 2017 4:58:28 PM] (rohithsharmaks) YARN-7674. Update Timeline Reader 
web app address in UI2. Contributed by
[Dec 20, 2017 6:25:33 PM] (stevel) HADOOP-14965. S3a input stream "normal" 
fadvise mode to be adaptive
[Dec 20, 2017 9:39:00 PM] (rkanter) YARN-7577. Unit Fail: 
TestAMRestart#testPreemptedAMRestartOnRMRestart
[Dec 21, 2017 1:04:34 AM] (aajisaka) HDFS-12949. Fix findbugs warning in 
ImageWriter.java.
[Dec 21, 2017 2:58:34 AM] (aajisaka) HADOOP-15133. [JDK9] Ignore 
com.sun.javadoc.* and com.sun.tools.* in
[Dec 21, 2017 2:15:53 PM] (stevel) HADOOP-15113. NPE in S3A getFileStatus: null 
instrumentation on using
[Dec 21, 2017 2:58:58 PM] (stevel) HADOOP-13282. S3 blob etags to be made 
visible in S3A




-1 overall


The following subsystems voted -1:
asflicense findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Possible null pointer dereference of replication in 
org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
 Short, Byte). Dereferenced at INodeFile.java:[line 210] 
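
   For context, this warning fires on the auto-unboxing of a possibly-null 
Short. A minimal sketch of the usual fix (hypothetical method name, not the 
actual INodeFile code): guard before unboxing.

{code:java}
// Hypothetical sketch: unboxing a nullable Short NPEs if the caller
// passes null; an explicit guard fixes the bug and the warning.
static long toRedundancy(Short replication) {
  if (replication == null) {
    throw new IllegalArgumentException(
        "replication must be set for CONTIGUOUS blocks");
  }
  return replication; // safe unboxing after the null check
}
{code}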

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
   org.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
internal representation by returning Resource.resources. At 
Resource.java:[line 234] 
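
   The textbook fix for this class of warning is a defensive copy; whether 
YARN actually wants the copy overhead here, or prefers to exclude the 
warning, is a call for the patch. A sketch of the copy variant:

{code:java}
// Sketch of the defensive-copy fix for the EI_EXPOSE_REP warning:
// hand callers a copy so they cannot mutate internal Resource state.
public ResourceInformation[] getResources() {
  return java.util.Arrays.copyOf(resources, resources.length);
}
{code}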

Failed junit tests :

   hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem 
   hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 
   hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 
   hadoop.hdfs.TestFileChecksum 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 
   hadoop.hdfs.TestReadStripedFileWithMissingBlocks 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 
   hadoop.hdfs.TestDFSStripedInputStream 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 
   hadoop.cli.TestErasureCodingCLI 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.TestReplication 
   hadoop.hdfs.TestHDFSFileSystemContract 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy 
   hadoop.hdfs.TestEncryptedTransfer 
   hadoop.hdfs.TestErasureCodingPolicies 
   hadoop.hdfs.TestDFSStripedOutputStream 
   hadoop.hdfs.TestDecommission 
   hadoop.hdfs.TestReconstructStripedFile 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 
   hadoop.hdfs.client.impl.TestBlockReaderFactory 
   hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage 
   hadoop.yarn.client.api.impl.TestAMRMClientOnRMRestart 
   hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator 
   hadoop.mapreduce.v2.TestUberAM 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/629/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/629/artifact/out/diff-compile-javac-root.txt
  [280K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/629/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/629/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/629/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/629/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/629/artifact/out/whitespace-eol.txt
  [9.2M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/629/artifact/out/whitespace-tabs.txt
  [292K]

   findbugs:

   
https://builds.apache.org/job/h

[jira] [Created] (HADOOP-15139) [Umbrella] Improvements and fixes for Hadoop shaded client work

2017-12-21 Thread Junping Du (JIRA)
Junping Du created HADOOP-15139:
---

 Summary: [Umbrella] Improvements and fixes for Hadoop shaded 
client work 
 Key: HADOOP-15139
 URL: https://issues.apache.org/jira/browse/HADOOP-15139
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Junping Du
Priority: Critical


In HADOOP-11656, we made great progress on splitting third-party 
dependencies out of the shaded Hadoop client jar (hadoop-client-api), putting 
runtime dependencies in hadoop-client-runtime, and providing a shaded 
hadoop-client-minicluster for tests. However, some work remains before this 
feature is fully complete:
- We don't have comprehensive documentation to guide downstream 
projects/users in adopting the shaded JARs in place of the previous JARs.
- We should consider wrapping up the Hadoop tools (distcp, aws, azure) in 
shaded versions as well.
- More issues are likely to surface as the shaded jars are adopted in more 
test and production environments, like HADOOP-15137.

Let's use this umbrella JIRA to track the remaining efforts to improve the 
Hadoop shaded client.
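
As a strawman for the missing documentation, here is a minimal sketch 
(assuming Maven; the 3.0.0 version is an assumption) of how a downstream 
project would consume the shaded artifacts: compile against 
hadoop-client-api, and pull the relocated third-party classes in via 
hadoop-client-runtime at runtime scope only.

{code:xml}
<!-- Hypothetical downstream POM fragment; version is an assumption. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client-api</artifactId>
  <version>3.0.0</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client-runtime</artifactId>
  <version>3.0.0</version>
  <scope>runtime</scope>
</dependency>
{code}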

CC [~busbey], [~bharatviswa] and [~vinodkv].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15138) CLONE - Executing the command 'hdfs -Dhadoop.security.credential.provider.path=file1.jceks,file2.jceks' fails if permission is denied to some files

2017-12-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15138.
-
Resolution: Duplicate

Closing this as a duplicate of the original, which describes the same problem 
and is still outstanding. Please discuss solutions and submit patches there. 
Thanks.

> CLONE - Executing the command 'hdfs 
> -Dhadoop.security.credential.provider.path=file1.jceks,file2.jceks' fails if 
> permission is denied to some files
> ---
>
> Key: HADOOP-15138
> URL: https://issues.apache.org/jira/browse/HADOOP-15138
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, hdfs-client, security
>Affects Versions: 2.8.0
>Reporter: Fan
>Priority: Critical
>  Labels: features
>
> === 
> Request Use Case: 
> UC1: 
> The customer has the path to a directory and subdirectories full of keys. The 
> customer knows that they do not have access to all of the keys but, ignoring 
> this problem, makes a list of the keys. 
> UC1.2: 
> The customer tries the keys on the list in FIFO order. If access to a key is 
> granted locally, they can then attempt the login on s3a. 
> UC1.3: 
> The customer tries the keys on the list in FIFO order. If access to a key is 
> not granted locally, they skip the login on s3a and try the next key on the 
> list. 
> ===
> For now, UC1.3 fails with the exception below and does not try the next key:
> {code}
> $ hdfs  --loglevel DEBUG dfs 
> -Dhadoop.security.credential.provider.path=jceks://hdfs/tmp/aws.jceks,jceks://hdfs/tmp/awst.jceks
>  -ls s3a://av-dl-hwx-nprod-anhffpoc-enriched/hive/e_ceod/
> Not retrying because try once and fail.
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>  Permission denied: user=502549376, access=READ, 
> inode="/tmp/aws.jceks":admin:hdfs:-rwx--
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2017-12-21 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/75/

[Dec 19, 2017 5:21:56 PM] (varunsaxena) YARN-7662. [ATSv2] Define new set of 
configurations for reader and
[Dec 20, 2017 6:09:38 AM] (sunilg) YARN-7032. [ATSv2] NPE while starting hbase 
co-processor when HBase




-1 overall


The following subsystems voted -1:
asflicense unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Unreaped Processes :

   hadoop-common:1 
   hadoop-hdfs:23 
   hadoop-yarn-server-timelineservice:1 
   hadoop-yarn-client:6 
   hadoop-yarn-applications-distributedshell:1 
   hadoop-mapreduce-client-jobclient:13 
   hadoop-distcp:3 
   hadoop-extras:1 

Failed junit tests :

   hadoop.hdfs.web.TestHttpsFileSystem 
   hadoop.hdfs.web.TestWebHdfsFileSystemContract 
   hadoop.hdfs.web.TestWebHDFSAcl 
   hadoop.hdfs.TestDatanodeReport 
   hadoop.hdfs.web.TestHftpFileSystem 
   hadoop.yarn.server.TestDiskFailures 
   hadoop.mapreduce.security.ssl.TestEncryptedShuffle 
   hadoop.tools.TestDistCpSystem 
   hadoop.tools.TestIntegration 
   hadoop.tools.TestDistCpViewFs 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.resourceestimator.service.TestResourceEstimatorService 

Timed out junit tests :

   org.apache.hadoop.log.TestLogLevel 
   org.apache.hadoop.hdfs.TestLeaseRecovery2 
   org.apache.hadoop.hdfs.TestRead 
   org.apache.hadoop.hdfs.web.TestWebHdfsTokens 
   org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream 
   org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade 
   org.apache.hadoop.hdfs.TestFileAppendRestart 
   org.apache.hadoop.hdfs.security.TestDelegationToken 
   org.apache.hadoop.hdfs.web.TestWebHdfsWithRestCsrfPreventionFilter 
   org.apache.hadoop.hdfs.TestDFSMkdirs 
   org.apache.hadoop.hdfs.TestDFSOutputStream 
   org.apache.hadoop.security.TestRefreshUserMappings 
   org.apache.hadoop.hdfs.web.TestWebHDFS 
   org.apache.hadoop.hdfs.web.TestWebHDFSXAttr 
   org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes 
   org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs 
   org.apache.hadoop.hdfs.TestDistributedFileSystem 
   org.apache.hadoop.hdfs.web.TestWebHDFSForHA 
   org.apache.hadoop.hdfs.TestReplaceDatanodeFailureReplication 
   org.apache.hadoop.hdfs.TestDFSShell 
   
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServices
 
   org.apache.hadoop.yarn.client.TestRMFailover 
   org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA 
   org.apache.hadoop.yarn.client.api.impl.TestYarnClientWithReservation 
   org.apache.hadoop.yarn.client.api.impl.TestYarnClient 
   org.apache.hadoop.yarn.client.api.impl.TestAMRMClient 
   org.apache.hadoop.yarn.client.api.impl.TestNMClient 
   
org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell 
   org.apache.hadoop.fs.TestFileSystem 
   org.apache.hadoop.mapred.TestMiniMRClasspath 
   org.apache.hadoop.mapred.TestClusterMapReduceTestCase 
   org.apache.hadoop.mapreduce.security.TestMRCredentials 
   org.apache.hadoop.mapred.TestJobSysDirWithDFS 
   org.apache.hadoop.fs.TestDFSIO 
   org.apache.hadoop.mapreduce.security.TestBinaryTokenFile 
   org.apache.hadoop.mapred.TestMRTimelineEventHandling 
   org.apache.hadoop.mapred.join.TestDatamerge 
   org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers 
   org.apache.hadoop.mapred.TestLazyOutput 
   org.apache.hadoop.mapred.TestReduceFetch 
   org.apache.hadoop.conf.TestNoDefaultsJobConf 
   org.apache.hadoop.tools.TestDistCpSync 
   org.apache.hadoop.tools.TestDistCpSyncReverseFromTarget 
   org.apache.hadoop.tools.TestDistCpSyncReverseFromSource 
   org.apache.hadoop.tools.TestCopyFiles 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/75/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/75/artifact/out/diff-compile-javac-root.txt
  [324K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/75/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/75/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/75/artifact/out/diff-patch-shellcheck.txt
  [76K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/75/art

[jira] [Created] (HADOOP-15138) CLONE - Executing the command 'hdfs -Dhadoop.security.credential.provider.path=file1.jceks,file2.jceks' fails if permission is denied to some files

2017-12-21 Thread Fan (JIRA)
Fan created HADOOP-15138:


 Summary: CLONE - Executing the command 'hdfs 
-Dhadoop.security.credential.provider.path=file1.jceks,file2.jceks' fails if 
permission is denied to some files
 Key: HADOOP-15138
 URL: https://issues.apache.org/jira/browse/HADOOP-15138
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3, hdfs-client, security
Affects Versions: 2.8.0
Reporter: Fan
Priority: Critical


=== 
Request Use Case: 
UC1: 
The customer has the path to a directory and subdirectories full of keys. The 
customer knows that they do not have access to all of the keys but, ignoring 
this problem, makes a list of the keys. 

UC1.2: 
The customer tries the keys on the list in FIFO order. If access to a key is 
granted locally, they can then attempt the login on s3a. 

UC1.3: 
The customer tries the keys on the list in FIFO order. If access to a key is 
not granted locally, they skip the login on s3a and try the next key on the 
list. 
===

For now, UC1.3 fails with the exception below and does not try the next key:
{code}
$ hdfs  --loglevel DEBUG dfs 
-Dhadoop.security.credential.provider.path=jceks://hdfs/tmp/aws.jceks,jceks://hdfs/tmp/awst.jceks
 -ls s3a://av-dl-hwx-nprod-anhffpoc-enriched/hive/e_ceod/

Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
 Permission denied: user=502549376, access=READ, 
inode="/tmp/aws.jceks":admin:hdfs:-rwx--
{code}
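
For illustration, a minimal sketch of the per-key fallback these use cases 
ask for, using the existing CredentialProviderFactory API: walk the 
configured providers in order and skip any that cannot be read. (In practice 
the failure above can occur while the provider list itself is being built, 
since a JCEKS keystore may be loaded eagerly, so this only shows the desired 
behavior, not a drop-in fix.)

{code:java}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.alias.CredentialProvider;
import org.apache.hadoop.security.alias.CredentialProviderFactory;

// Hypothetical helper, not existing Hadoop behavior: return the first
// readable credential for an alias, skipping unreadable providers.
public class LenientCredentialLookup {
  public static char[] lookup(Configuration conf, String alias) {
    List<CredentialProvider> providers;
    try {
      // May still fail here if a keystore is read eagerly on construction.
      providers = CredentialProviderFactory.getProviders(conf);
    } catch (IOException e) {
      return null;
    }
    for (CredentialProvider provider : providers) {
      try {
        CredentialProvider.CredentialEntry entry =
            provider.getCredentialEntry(alias);
        if (entry != null) {
          return entry.getCredential();
        }
      } catch (IOException e) {
        // Permission denied or unreadable keystore: try the next provider.
      }
    }
    return null;
  }
}
{code}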



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org