Re: VOTE: Hadoop Ozone 0.4.0-alpha RC2

2019-05-04 Thread Elek, Marton
+1 (non-binding).

Thanks for the continuous effort, Ajay. This is the best Ozone release
package I have ever seen.

I checked the following (a sketch of the signature/checksum commands follows the list):

 * Signatures are checked: OK
 * sha512 checksums are checked: OK
 * can be built from the source: OK
 * smoketest executed from the bin package (after build): OK
 * smoketest executed from the src package: OK
 * 'ozone version' shows the right version info: OK
 * docs are included: OK
 * docs are visible from the web ui: OK
 * Uses the latest Ratis release: OK
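
A minimal sketch of the signature and checksum checks above, using gpg and
sha512sum (the tarball name is illustrative; the real artifact names are the
ones in the RC directory linked below):

  # import the release signing keys (KEYS file from the Apache dist area)
  gpg --import KEYS
  # verify the detached signature shipped next to the tarball
  gpg --verify hadoop-ozone-0.4.0-alpha.tar.gz.asc hadoop-ozone-0.4.0-alpha.tar.gz
  # compute the SHA-512 digest and compare it with the published .sha512 file
  sha512sum hadoop-ozone-0.4.0-alpha.tar.gz
  cat hadoop-ozone-0.4.0-alpha.tar.gz.sha512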

Marton

On 4/30/19 6:04 AM, Ajay Kumar wrote:
> Hi All,
> 
> 
> 
> We have created the third release candidate (RC2) for Apache Hadoop Ozone 
> 0.4.0-alpha.
> 
> 
> 
> This release contains the security work for Ozone. Below are some of the 
> important features in it:
> 
> 
> 
>   *   Hadoop Delegation Tokens and Block Tokens supported for Ozone.
>   *   Transparent Data Encryption (TDE) Support - Allows data blocks to be 
> encrypted at rest.
>   *   Kerberos support for Ozone.
>   *   Certificate Infrastructure for Ozone  - Tokens use PKI instead of 
> shared secrets.
>   *   Datanode to Datanode communication secured via mutual TLS.
>   *   Ability to secure an Ozone cluster that works with Yarn, Hive, and Spark.
>   *   Skaffold support to deploy Ozone clusters on K8s.
>   *   Support for S3 Authentication Mechanisms such as the S3 v4 Authentication 
> protocol (see the S3 gateway sketch after this list).
>   *   S3 Gateway supports Multipart upload.
>   *   S3A file system is tested and supported.
>   *   Support for Tracing and Profiling for all Ozone components.
>   *   Audit Support - including Audit Parser tools.
>   *   Apache Ranger Support in Ozone.
>   *   Extensive failure testing for Ozone.
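
As a rough illustration of the S3 gateway items above: the gateway exposes an
S3-compatible HTTP endpoint, so standard S3 tooling works against it. A minimal
sketch with the AWS CLI, assuming a gateway running locally on its default port
9878 and an example bucket name:

  # on a secure cluster, S3 credentials can be fetched with the Ozone CLI
  # (ozone s3 getsecret) and configured for the AWS CLI as usual
  aws s3api --endpoint-url http://localhost:9878 create-bucket --bucket bucket1
  # the high-level 'aws s3 cp' switches to multipart upload automatically for
  # large files, exercising the gateway's multipart support
  aws s3 --endpoint-url http://localhost:9878 cp ./large-file s3://bucket1/large-file
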
> 
> The RC artifacts are available at 
> https://home.apache.org/~ajay/ozone-0.4.0-alpha-rc2/
> 
> 
> 
> The RC tag in git is ozone-0.4.0-alpha-RC2 (git hash 
> 4ea602c1ee7b5e1a5560c6cbd096de4b140f776b)
> 
> 
> 
> Please try it out, vote, or just give us feedback.
> 
> 
> 
> The vote will run for 5 days, ending on May 4, 2019, 04:00 UTC.
> 
> 
> 
> Thank you very much,
> 
> Ajay
> 


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-05-04 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1126/

[May 3, 2019 5:52:14 AM] (vinayakumarb) HADOOP-16059. Use SASL Factories Cache 
to Improve Performance.
[May 3, 2019 5:14:17 PM] (cliang) HADOOP-16292. Refactor checkTrustAndSend in 
SaslDataTransferClient to
[May 3, 2019 6:49:00 PM] (nanda) HDDS-1448 : RatisPipelineProvider should only 
consider open pipeline
[May 3, 2019 10:05:17 PM] (gifuma) YARN-9528. Federation RMs starting up at the 
same time can give




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore
 
   Unread field:TimelineEventSubDoc.java:[line 56] 
   Unread field:TimelineMetricSubDoc.java:[line 44] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

Failed junit tests :

   hadoop.ha.TestZKFailoverController 
   hadoop.hdfs.server.datanode.TestBPOfferService 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.cli.TestLogsCLI 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapreduce.v2.app.TestRuntimeEstimators 
   
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler
 
   hadoop.hdds.scm.container.TestContainerStateManagerIntegration 
   hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules 
   hadoop.ozone.om.TestMultipleContainerReadWrite 
   hadoop.ozone.client.rpc.TestContainerStateMachineFailures 
   hadoop.ozone.web.client.TestBuckets 
   hadoop.ozone.web.client.TestVolume 
   hadoop.ozone.TestStorageContainerManager 
   hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode 
   hadoop.ozone.scm.TestXceiverClientManager 
   hadoop.ozone.ozShell.TestOzoneDatanodeShell 
   
hadoop.ozone.container.common.statemachine.commandhandler.TestDeleteContainerHandler
 
   hadoop.ozone.scm.node.TestSCMNodeMetrics 
   hadoop.ozone.client.rpc.TestBlockOutputStream 
   hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures 
   hadoop.ozone.om.TestOzoneManager 
   hadoop.ozone.web.client.TestKeys 
   
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerHandler
 
   hadoop.ozone.ozShell.TestOzoneShell 
   
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 
   hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis 
   hadoop.ozone.client.rpc.TestContainerStateMachine 
   hadoop.ozone.om.TestContainerReportWithKeys 
   hadoop.ozone.container.TestContainerReplication 
   hadoop.ozone.client.rpc.TestReadRetries 
   hadoop.hdds.scm.pipeline.TestSCMRestart 
   hadoop.hdds.scm.pipeline.TestRatisPipelineProvider 
   hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException 
   hadoop.ozone.om.TestScmSafeMode 
   hadoop.ozone.om.TestOMDbCheckpointServlet 
   hadoop.ozone.om.TestOzoneManagerHA 
   
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion 
   hadoop.ozone.om.TestOmInit 
   hadoop.ozone.container.ozoneimpl.TestOzoneContainer 
   hadoop.ozone.om.TestOmBlockVersioning 
   hadoop.ozone.scm.TestAllocateContainer 
   hadoop.hdds.scm.pipeline.TestPipelineClose 
   hadoop.hdds.scm.pipeline.TestNodeFailure 
   hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus 
   hadoop.fs.ozone.contract.ITestOzoneContractRename 
   hadoop.fs.ozone.contract.ITestOzoneContractRootDir 
   hadoop.fs.ozone.contract.ITestOzoneContractMkdir 
   hadoop.fs.ozone.contract.ITestOzoneContractSeek 
   hadoop.fs.ozone.contract.ITestOz

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-05-04 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/311/

[May 3, 2019 4:17:43 PM] (ericp) YARN-9285: RM UI progress column is of wrong 
type. Contributed by  Ahmed




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java:instance 
field map In GlobalStorageStatistics.java 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   hadoop.hdfs.server.namenode.TestNameNodeHttpServerXFrame 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits 
   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.sls.TestReservationSystemInvariants 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/311/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/311/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/311/artifact/out/diff-compile-cc-root-jdk1.8.0_191.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/311/artifact/out/diff-compile-javac-root-jdk1.8.0_191.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/311/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/311/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/311/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/311/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/311/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/311/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/311/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/311/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/311/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/311/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/311/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/311/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]