Re: Moving to Yetus 0.12.0

2020-04-27 Thread Akira Ajisaka
Hi folks,

After the image was upgraded from Ubuntu 16.04 to Ubuntu 18.04 (
https://issues.apache.org/jira/browse/HADOOP-16054), there are 1000+ failing
unit tests in the daily qbt job (
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1482/console).
On the other hand, my test qbt job on the new CI servers (with Yetus
0.12.0) is still working well, so I configured the new qbt job (
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/) to
send e-mails to the dev lists. I'll disable the old qbt job in a few days
if there are no problems.

Regards,
Akira

On Sun, Apr 19, 2020 at 5:05 PM Akira Ajisaka  wrote:

> Updated the Precommit-(HADOOP|HDFS|YARN|MAPREDUCE)-Build jobs. The daily qbt
> jobs are kept as-is. Now I'm testing the qbt jobs with Yetus 0.12.0 on the
> new CI servers:
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86 (the
> tests actually run on Java 8; I'll rename the job)
>
> -Akira
>
> On Sun, Apr 19, 2020 at 3:41 AM Akira Ajisaka  wrote:
>
>> Hi folks,
>>
>> I updated Jenkinsfile to use Yetus 0.12.0 in GitHub PR.
>> https://issues.apache.org/jira/browse/HADOOP-16944
>>
>> In addition, I'm updating the configs of the Jenkins jobs for ASF JIRA
>> precommits to use Yetus 0.12.0. I updated the settings of
>> https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-HADOOP-Build
>> and am testing them in https://issues.apache.org/jira/browse/HADOOP-17000.
>>
>> If something is wrong, please let me know, and feel free to revert the
>> config.
>>
>> Regards,
>> Akira
>>
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-04-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1482/

[Apr 26, 2020 7:16:53 AM] (ayushsaxena) HADOOP-17007. hadoop-cos fails to 
build. Contributed by Yang Yu.




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs
   Possible null pointer dereference of effectiveDirective in
   org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheDirective(CacheDirectiveInfo, EnumSet, boolean)
   Dereferenced at FSNamesystem.java:[line 7444]
   Possible null pointer dereference of ret in
   org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(String, String, boolean)
   Dereferenced at FSNamesystem.java:[line 3213]
   Possible null pointer dereference of res in
   org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(String, String, boolean, Options$Rename[])
   Dereferenced at FSNamesystem.java:[line 3248]
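For context, the pattern these "possible null pointer dereference" findings
flag, in a minimal sketch (hypothetical names, not the actual FSNamesystem
code): a helper can return null on some path, and the caller dereferences
the result without a guard.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch, not FSNamesystem code.
    class NullDerefSketch {
        private static final Map<String, String> DIRECTIVES = new HashMap<>();

        // May return null when the key is absent -- the "some path" FindBugs sees.
        static String lookup(String key) {
            return DIRECTIVES.get(key);
        }

        static int flaggedStyle(String key) {
            String directive = lookup(key);
            return directive.length(); // possible NPE: directive may be null
        }

        static int fixedStyle(String key) {
            String directive = lookup(key);
            if (directive == null) { // guard before dereferencing
                throw new IllegalStateException("no directive for " + key);
            }
            return directive.length();
        }
    }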

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common
   org.apache.hadoop.yarn.server.webapp.WebServiceClient.sslFactory should be
   package protected At WebServiceClient.java:[line 42]

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose
   internal representation by returning CosNInputStream$ReadBuffer.buffer
   At CosNInputStream.java:[line 87]
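The "may expose internal representation" pattern, in a minimal sketch
(hypothetical class, not the actual CosN code). The usual fix is a defensive
copy, though I/O code sometimes keeps the flagged form deliberately for
performance and suppresses the warning instead.

    import java.util.Arrays;

    // Illustrative sketch; not the actual CosNInputStream code.
    class ExposureSketch {
        private final byte[] buffer = new byte[8];

        // Flagged style: callers get the live internal array and can mutate it.
        byte[] getBufferUnsafe() {
            return buffer;
        }

        // Defensive-copy fix: internal state stays encapsulated.
        byte[] getBufferCopy() {
            return Arrays.copyOf(buffer, buffer.length);
        }
    }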

Failed junit tests :

   hadoop.metrics2.source.TestJvmMetrics 
   hadoop.io.compress.snappy.TestSnappyCompressorDecompressor 
   hadoop.security.token.delegation.TestZKDelegationTokenSecretManager 
   hadoop.io.compress.TestCompressorDecompressor 
   hadoop.hdfs.server.namenode.ha.TestConfiguredFailoverProxyProvider 
   hadoop.hdfs.TestAclsEndToEnd 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.server.namenode.TestSaveNamespace 
   hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup 
   hadoop.hdfs.server.namenode.TestRefreshBlockPlacementPolicy 
   hadoop.hdfs.TestDecommissionWithStriped 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshot 
   hadoop.hdfs.TestSetrepIncreasing 
   hadoop.hdfs.TestByteBufferPread 
   hadoop.hdfs.server.blockmanagement.TestHeartbeatHandling 
   hadoop.hdfs.TestFileChecksumCompositeCrc 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot 
   hadoop.hdfs.server.namenode.TestFSEditLogLoader 
   hadoop.hdfs.TestMaintenanceState 
   hadoop.hdfs.TestErasureCodingPolicyWithSnapshot 
   hadoop.hdfs.server.namenode.TestStorageRestore 
   hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens 
   hadoop.hdfs.server.namenode.TestXAttrConfigFlag 
   hadoop.hdfs.tools.TestECAdmin 
   hadoop.hdfs.TestFSOutputSummer 
   hadoop.hdfs.server.namenode.ha.TestEditLogTailer 
   hadoop.hdfs.TestQuotaAllowOwner 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs

Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-04-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/668/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
   hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

FindBugs :

   module:hadoop-common-project/hadoop-minikdc
   Possible null pointer dereference in org.apache.hadoop.minikdc.MiniKdc.delete(File)
   due to return value of called method Dereferenced at MiniKdc.java:[line 515]
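"Due to return value of called method" usually means a JDK call that can
return null was used unchecked; File.listFiles() is the classic case. A
minimal sketch (hypothetical, not the actual MiniKdc code):

    import java.io.File;

    // Hypothetical sketch; not the actual MiniKdc code.
    class ListFilesSketch {
        // Flagged style: listFiles() returns null if the path is not a
        // directory or the listing fails, so the loop can throw NPE.
        static void deleteUnsafe(File dir) {
            for (File f : dir.listFiles()) {
                f.delete(); // delete() return value ignored for brevity
            }
        }

        // Fix: check the return value before iterating.
        static void deleteSafe(File dir) {
            File[] children = dir.listFiles();
            if (children == null) {
                return; // not a directory, or listing failed
            }
            for (File f : children) {
                f.delete();
            }
        }
    }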

FindBugs :

   module:hadoop-common-project/hadoop-auth
   org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest, HttpServletResponse)
   makes inefficient use of keySet iterator instead of entrySet iterator
   At MultiSchemeAuthenticationHandler.java:[line 192]
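The keySet-versus-entrySet inefficiency, in a minimal sketch (illustrative
code, not the actual handler): iterating keySet() and calling get() per key
does a redundant lookup that entrySet() avoids.

    import java.util.Map;

    // Illustrative sketch; not the actual MultiSchemeAuthenticationHandler code.
    class IteratorSketch {
        // Flagged style: one extra hash lookup per key.
        static int keySetStyle(Map<String, Integer> m) {
            int sum = 0;
            for (String k : m.keySet()) {
                sum += m.get(k);
            }
            return sum;
        }

        // Fix: entrySet() yields key and value in a single pass.
        static int entrySetStyle(Map<String, Integer> m) {
            int sum = 0;
            for (Map.Entry<String, Integer> e : m.entrySet()) {
                sum += e.getValue();
            }
            return sum;
        }
    }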

FindBugs :

   module:hadoop-common-project/hadoop-common
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) unconditionally
   sets the field unknownValue At CipherSuite.java:[line 44]
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int)
   unconditionally sets the field unknownValue At CryptoProtocolVersion.java:[line 67]
   Possible null pointer dereference in org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File)
   due to return value of called method Dereferenced at FileUtil.java:[line 118]
   Possible null pointer dereference in
   org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, File, Path, File)
   due to return value of called method Dereferenced at RawLocalFileSystem.java:[line 383]
   Useless condition: lazyPersist == true at this point At CommandWithDestination.java:[line 502]
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) incorrectly
   handles double value At DoubleWritable.java:[line 78]
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, byte[], int, int)
   incorrectly handles double value At DoubleWritable.java:[line 97]
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly
   handles float value At FloatWritable.java:[line 71]
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, byte[], int, int)
   incorrectly handles float value At FloatWritable.java:[line 89]
   Possible null pointer dereference in org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter)
   due to return value of called method Dereferenced at IOUtils.java:[line 389]
   Possible bad parsing of shift operation in
   org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 398]
   org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory)
   unconditionally sets the field mmfImpl At DefaultMetricsFactory.java:[line 49]
   org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean)
   unconditionally sets the field miniClusterMode At DefaultMetricsSystem.java:[line 92]
   Useless object stored in variable seqOs of method
   org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.addOrUpdateToken(AbstractDelegationTokenIdentifier, AbstractDelegationTokenSecretManager$DelegationTokenInformation, boolean)
   At ZKDelegationTokenSecretManager.java:seqOs of method
   org.apache.hadoop.
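On the "incorrectly handles double value" findings above: a minimal sketch
(illustrative, not the actual Writable code) of why a hand-rolled comparison
misorders NaN, and the usual Double.compare fix.

    // Illustrative sketch; not the actual DoubleWritable code.
    class DoubleCompareSketch {
        // Flagged style: with NaN both tests are false, so NaN compares
        // "equal" to everything and breaks the sort contract.
        static int compareUnsafe(double a, double b) {
            return (a < b) ? -1 : ((a > b) ? 1 : 0);
        }

        // Fix: Double.compare defines a total order (NaN sorts last).
        static int compareSafe(double a, double b) {
            return Double.compare(a, b);
        }

        public static void main(String[] args) {
            System.out.println(compareUnsafe(Double.NaN, 1.0)); // 0 -- wrong
            System.out.println(compareSafe(Double.NaN, 1.0));   // 1 -- NaN last
        }
    }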

[jira] [Created] (HADOOP-17017) S3A client retries on SSL Auth exceptions triggered by "." bucket names

2020-04-27 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-17017:
---

 Summary: S3A client retries on SSL Auth exceptions triggered by 
"." bucket names
 Key: HADOOP-17017
 URL: https://issues.apache.org/jira/browse/HADOOP-17017
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.2.1
Reporter: Steve Loughran


If you have a "." in a bucket name (it's allowed!) then virtual-host-style
HTTPS connections fail with a javax.net.ssl exception, except we retry and
the inner cause gets wrapped in a generic "client exception".

I'm not going to try to be clever about fixing this, but we should
* make sure that the inner exception is raised up
* avoid retries
* document it in the troubleshooting page
* if there is a well-known public "." bucket (cloudera has some :)) we can test

I have a vague suspicion the AWS SDK is retrying too. Not much we can do there.
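For anyone hitting this before a fix lands, a commonly suggested workaround
(a suggestion here, not part of this ticket) is to switch S3A to path-style
access so the dotted bucket name stays out of the TLS hostname; the
fs.s3a.path.style.access option already exists for this.

    import org.apache.hadoop.conf.Configuration;

    public class PathStyleSketch {
        // Sketch: send "bucket.with.dots" in the URL path rather than the
        // hostname, avoiding the wildcard-certificate mismatch. Workaround
        // suggestion only, not the fix proposed in HADOOP-17017.
        public static Configuration s3aPathStyleConf() {
            Configuration conf = new Configuration();
            conf.setBoolean("fs.s3a.path.style.access", true);
            return conf;
        }
    }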






[jira] [Created] (HADOOP-17016) Adding Common Counters in ABFS

2020-04-27 Thread Mehakmeet Singh (Jira)
Mehakmeet Singh created HADOOP-17016:


 Summary: Adding Common Counters in ABFS
 Key: HADOOP-17016
 URL: https://issues.apache.org/jira/browse/HADOOP-17016
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/azure
Affects Versions: 3.3.0
Reporter: Mehakmeet Singh
Assignee: Mehakmeet Singh


Common Counters to be added to ABFS:
|OP_CREATE|
|OP_OPEN|
|OP_GET_FILE_STATUS|
|OP_APPEND|
|OP_CREATE_NON_RECURSIVE|
|OP_DELETE|
|OP_EXISTS|
|OP_GET_DELEGATION_TOKEN|
|OP_LIST_STATUS|
|OP_MKDIRS|
|OP_RENAME|

|DIRECTORIES_CREATED|
|DIRECTORIES_DELETED|
|FILES_CREATED|
|FILES_DELETED|
|ERROR_IGNORED|

 I propose:
 * Have an enum class to define all the counters (sketched below).
 * Have an instrumentation class to create a MetricsRegistry and register all
the counters.
 * Increment the counters in AzureBlobFileSystem.
 * Add integration and unit tests to validate the counters.
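A minimal sketch of the enum-plus-instrumentation shape being proposed
(illustrative names, with plain AtomicLongs standing in for the Hadoop
MetricsRegistry wiring):

    import java.util.EnumMap;
    import java.util.concurrent.atomic.AtomicLong;

    // Placeholder names (AbfsStatistic, AbfsInstrumentation), not the final API.
    enum AbfsStatistic {
        OP_CREATE, OP_OPEN, OP_GET_FILE_STATUS, OP_APPEND, OP_DELETE,
        OP_RENAME, FILES_CREATED, FILES_DELETED
    }

    class AbfsInstrumentation {
        private final EnumMap<AbfsStatistic, AtomicLong> counters =
                new EnumMap<>(AbfsStatistic.class);

        AbfsInstrumentation() {
            for (AbfsStatistic s : AbfsStatistic.values()) {
                counters.put(s, new AtomicLong());
            }
        }

        // Called from the matching AzureBlobFileSystem methods,
        // e.g. create() / open() / rename().
        void increment(AbfsStatistic s) {
            counters.get(s).incrementAndGet();
        }

        long value(AbfsStatistic s) {
            return counters.get(s).get();
        }
    }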


