[jira] [Created] (HADOOP-16834) Replace com.sun.istack.Nullable with javax.annotation.Nullable in DNS.java

2020-01-30 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-16834:
--

 Summary: Replace com.sun.istack.Nullable with 
javax.annotation.Nullable in DNS.java
 Key: HADOOP-16834
 URL: https://issues.apache.org/jira/browse/HADOOP-16834
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira Ajisaka


com.sun.istack.Nullable is used only in DNS.java, whereas javax.annotation.Nullable 
is widely used throughout the codebase, so the former should be replaced with the 
latter.
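
For context, a minimal sketch of the intended kind of change, assuming it is purely 
an import swap and the call sites stay as they are (hypothetical class, not the real 
org.apache.hadoop.net.DNS; assumes a JSR-305 jar is on the classpath):

    // Illustrative sketch only -- not the actual Hadoop DNS.java.
    // import com.sun.istack.Nullable;   // old: comes from the istack/JAXB runtime jar
    import javax.annotation.Nullable;    // new: JSR-305 annotation already used in Hadoop

    public class NullableSwapExample {
      /** Falls back to a default when no nameserver is given. */
      static String resolveNameserver(@Nullable String nameserver) {
        return (nameserver == null) ? "<system default>" : nameserver;
      }

      public static void main(String[] args) {
        System.out.println(resolveNameserver(null));       // <system default>
        System.out.println(resolveNameserver("8.8.8.8"));  // 8.8.8.8
      }
    }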



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-01-30 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1397/

[Jan 30, 2020 5:10:31 AM] (github) YARN-9743. [JDK11] 
TestTimelineWebServices.testContextFactory fails.
[Jan 30, 2020 10:16:51 AM] (github) HADOOP-16801. S3Guard listFiles will not 
query S3 if all listings are
[Jan 30, 2020 3:03:38 PM] (snemeth) YARN-10099. FS-CS converter: handle 
allow-undeclared-pools and
[Jan 31, 2020 12:59:36 AM] (gifuma) YARN-8982. [Router] Add locality policy. 
Contributed by Young Chen.


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-01-30 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1396/

[Jan 29, 2020 1:47:49 AM] (yufei) YARN-10015. Correct the sample command in SLS 
README file. Contributed
[Jan 29, 2020 3:54:43 AM] (ebadger) YARN-10084. Allow inheritance of max app 
lifetime / default app
[Jan 29, 2020 8:00:00 AM] (pjoseph) YARN-10107. Fix 
GpuResourcePlugin#getNMResourceInfo to honor Auto
[Jan 29, 2020 1:42:52 PM] (snemeth) MAPREDUCE-7260. Cross origin request 
support for Job history server web
[Jan 29, 2020 4:54:45 PM] (ericp) MAPREDUCE-7079: JobHistory#ServiceStop 
implementation is incorrect.
[Jan 29, 2020 5:00:27 PM] (kihwal) HDFS-15146. 
TestBalancerRPCDelay.testBalancerRPCDelay fails




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 
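
For reference, a minimal sketch of the null- and type-safe equals the two WorkerId 
warnings above ask for, assuming a single hostname field (hypothetical class, not 
the actual MaWo code):

    import java.util.Objects;

    public final class WorkerIdSketch {
      private final String hostname;

      public WorkerIdSketch(String hostname) {
        this.hostname = hostname;
      }

      @Override
      public boolean equals(Object obj) {
        if (this == obj) {
          return true;
        }
        if (!(obj instanceof WorkerIdSketch)) { // also rejects a null argument
          return false;
        }
        WorkerIdSketch other = (WorkerIdSketch) obj;
        return Objects.equals(hostname, other.hostname);
      }

      @Override
      public int hashCode() {
        return Objects.hash(hostname);
      }
    }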

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, 
File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long):in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream Obligation to 
clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
java.io.InputStream Obligation to clean up resource created at 
CosNativeFileSystemStore.java:[line 252] is not discharged 
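
As background on the two recurring patterns above, a minimal sketch assuming a 
hypothetical helper class (not the hadoop-cos code): name the charset explicitly 
instead of relying on the platform default, and use try-with-resources so the 
stream is closed even when the upload step throws:

    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.charset.StandardCharsets;

    public class CosFindbugsSketch {
      // "Reliance on default encoding": new String(byte[]) uses the platform
      // default charset; naming the charset makes the result locale-independent.
      static String bytesToText(byte[] raw) {
        return new String(raw, StandardCharsets.UTF_8);
      }

      // "May fail to clean up java.io.InputStream": try-with-resources closes
      // the stream even if the upload step throws.
      static void uploadPart(byte[] partData) throws IOException {
        try (InputStream in = new ByteArrayInputStream(partData)) {
          consume(in); // stand-in for handing the stream to the SDK upload call
        }
      }

      private static void consume(InputStream in) throws IOException {
        while (in.read() != -1) {
          // drain the stream; a real upload would copy it to the remote store
        }
      }
    }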

Failed junit tests :

   hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA 
   hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.TestDecommissionWithStriped 
   
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBas

Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-01-30 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/582/

[Jan 29, 2020 5:07:50 PM] (kihwal) HDFS-15146. 
TestBalancerRPCDelay.testBalancerRPCDelay fails
[Jan 29, 2020 5:33:12 PM] (ericp) MAPREDUCE-7079: JobHistory#ServiceStop 
implementation is incorrect.
[Jan 29, 2020 8:44:12 PM] (cliang) HDFS-15118. [SBN Read] Slow clients when 
Observer reads are enabled but




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
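
For context, a minimal sketch of the unbox/rebox pattern flagged above and the 
usual fix of keeping the boxed reference (hypothetical methods, not the actual 
ColumnRWHelper code):

    import java.util.Map;
    import java.util.TreeMap;

    public class ReboxSketch {
      // Flagged pattern: casting the boxed Long to a primitive long unboxes it,
      // and put() immediately re-boxes it when autoboxing the map key.
      static Map<Long, Object> withRebox(Object timestampObj, Object value) {
        Map<Long, Object> results = new TreeMap<>();
        long ts = (Long) timestampObj;   // unbox
        results.put(ts, value);          // rebox
        return results;
      }

      // Keeping the boxed reference avoids the round trip.
      static Map<Long, Object> withoutRebox(Object timestampObj, Object value) {
        Map<Long, Object> results = new TreeMap<>();
        Long ts = (Long) timestampObj;
        results.put(ts, value);
        return results;
      }
    }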

Failed junit tests :

   hadoop.ipc.TestIPC 
   hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA 
   hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.TestMultipleNNPortQOP 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMClient 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/582/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/582/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/582/artifact/out/diff-compile-cc-root-jdk1.8.0_242.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/582/artifact/out/diff-compile-javac-root-jdk1.8.0_242.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/582/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/582/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/582/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/582/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/582/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/582/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/582/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/582/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/582/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/582/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/582/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/582/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_242.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch-2

[jira] [Resolved] (HADOOP-16801) S3Guard queries S3 with recursive file listings

2020-01-30 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota resolved HADOOP-16801.
-
Resolution: Fixed

> S3Guard queries S3 with recursive file listings
> ---
>
> Key: HADOOP-16801
> URL: https://issues.apache.org/jira/browse/HADOOP-16801
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Minor
> Attachments: HADOOP-aws-no-prefetch.prelim.patch
>
>
> S3Guard does not respect the authoritative metadata store when listFiles is used 
> with recursive=true. It queries S3 even when the given directory tree is one level 
> deep with no nested directories and the parent directory listing is authoritative. 
> S3Guard should check the listings in the given directory tree for 
> authoritativeness and not query S3 when all listings in the tree are marked as 
> authoritative in the metadata table (provided the metadata store is configured to 
> be authoritative).
> Below is a description of how the current code works:
> S3AFileSystem#listFiles with the recursive option queries S3 even when the 
> directory listing is authoritative. FileStatusListingIterator is created with the 
> given entries from the metadata store 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Listing.java#L126]
>  . However, FileStatusListingIterator has an ObjectListingIterator that 
> prefetches from S3 regardless of the authoritative listing. We observed this 
> behavior when using DynamoDBMetadataStore.
> I suppressed the unnecessary S3 calls by providing a dumb listing iterator to the 
> listFiles call in the provided patch. Obviously this is not a solution; it just 
> demonstrates the source of the problem.
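
A minimal, pseudocode-level sketch of the check described above, assuming 
hypothetical names (this is not the actual S3AFileSystem/Listing API): skip the S3 
round trips only when the store is configured authoritative and every listing in 
the requested tree is marked authoritative.

    import java.util.Collection;

    public class S3GuardListingSketch {
      /** Stand-in for a per-directory listing returned by the metadata store. */
      interface DirListing {
        boolean isAuthoritative();
      }

      /**
       * True only when the metadata store is configured as authoritative and every
       * listing in the requested tree is marked authoritative; only then could a
       * recursive listFiles be served without querying S3.
       */
      static boolean canServeFromMetadataStore(
          Collection<? extends DirListing> listings, boolean storeIsAuthoritative) {
        if (!storeIsAuthoritative || listings.isEmpty()) {
          return false;                   // nothing trustworthy to serve from
        }
        for (DirListing listing : listings) {
          if (!listing.isAuthoritative()) {
            return false;                 // at least one subtree still needs S3
          }
        }
        return true;
      }
    }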



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org