Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-10 Thread Zhenyu Zheng
+1 (non-binding)

- Verified all hashes and checksums
- Tested on ARM platform for the following actions:
  + Built from source on Ubuntu 18.04, OpenJDK 8
  + Deployed a pseudo cluster
  + Ran some example jobs (grep, wordcount, pi)
  + Ran teragen/terasort/teravalidate
  + Ran TestDFSIO job

BR,

Zhenyu

On Fri, Jul 10, 2020 at 2:40 PM Akira Ajisaka  wrote:

> +1 (binding)
>
> - Verified checksums and signatures.
> - Built from the source with CentOS 7 and OpenJDK 8.
> - Successfully upgraded HDFS to 3.3.0-RC0 in our development cluster (with
> RBF, security, and OpenJDK 11) for end-users. No issues reported.
> - The document looks good.
> - Deployed pseudo cluster and ran some MapReduce jobs.
>
> Thanks,
> Akira
>
>
> On Tue, Jul 7, 2020 at 7:27 AM Brahma Reddy Battula 
> wrote:
>
> > Hi folks,
> >
> > This is the first release candidate for the first release of Apache
> > Hadoop 3.3.0
> > line.
> >
> > It contains *1644[1]* fixed JIRA issues since 3.2.1, which include a lot of
> > features and improvements (read the full set of release notes).
> >
> > The feature additions below are the highlights of the release.
> >
> > - ARM Support
> > - Enhancements and new features in S3A, S3Guard, and ABFS
> > - Java 11 runtime support and TLS 1.3
> > - Support for the Tencent Cloud COS file system
> > - Added security to HDFS Router
> > - Support for non-volatile storage class memory (SCM) in HDFS cache directives
> > - Support for an interactive Docker shell for running containers
> > - Scheduling of opportunistic containers
> > - A pluggable device plugin framework to ease vendor plugin development
> >
> > *The RC0 artifacts are at*:
> > http://home.apache.org/~brahma/Hadoop-3.3.0-RC0/
> >
> > *This is the first release to include an ARM binary; please have a look.*
> > *RC tag is *release-3.3.0-RC0.
> >
> >
> > *The maven artifacts are hosted here:*
> > https://repository.apache.org/content/repositories/orgapachehadoop-1271/
> >
> > *My public key is available here:*
> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >
> > The vote will run for 5 weekdays, until Tuesday, July 13 at 3:50 AM IST.
> >
> >
> > I have done some testing with my pseudo cluster. My +1 to start.
> >
> >
> >
> > Regards,
> > Brahma Reddy Battula
> >
> >
> > 1. project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.3.0)
> AND
> > fixVersion not in (3.2.0, 3.2.1, 3.1.3) AND status = Resolved ORDER BY
> > fixVersion ASC
> >
>


Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-10 Thread Vinayakumar B
+1 (Binding)

-Verified all checksums and signatures.
-Verified site, release notes, and changelogs
  + Maybe the changelog and release notes could be grouped by project at the
second level for a better look (this needs to be supported by Yetus)
-Tested on a local x86 3-node docker cluster.
  + Built from source with OpenJDK 8 on Ubuntu 18.04
  + Deployed a 3-node docker cluster
  + Ran various jobs (wordcount, Terasort, Pi, etc.)

No Issues reported.

-Vinay

On Fri, Jul 10, 2020 at 1:19 PM Sheng Liu  wrote:

> +1 (non-binding)
>
> - checkout the "3.3.0-aarch64-RC0" binaries packages
>
> - started a clusters with 3 nodes VMs of Ubuntu 18.04 ARM/aarch64,
> openjdk-11-jdk
>
> - checked some web UIs (NN, DN, RM, NM)
>
> - Executed a wordcount, TeraGen, TeraSort and TeraValidate
>
> - Executed a TestDFSIO job
>
> - Executed a Pi job
>
> BR,
> Liusheng
>
> On Fri, Jul 10, 2020 at 3:45 PM Zhenyu Zheng  wrote:
>
> > +1 (non-binding)
> >
> > - Verified all hashes and checksums
> > - Tested on ARM platform for the following actions:
> >   + Built from source on Ubuntu 18.04, OpenJDK 8
> >   + Deployed a pseudo cluster
> >   + Ran some example jobs(grep, wordcount, pi)
> >   + Ran teragen/terasort/teravalidate
> >   + Ran TestDFSIO job
> >
> > BR,
> >
> > Zhenyu
> >
> > On Fri, Jul 10, 2020 at 2:40 PM Akira Ajisaka 
> wrote:
> >
> > > +1 (binding)
> > >
> > > - Verified checksums and signatures.
> > > - Built from the source with CentOS 7 and OpenJDK 8.
> > > - Successfully upgraded HDFS to 3.3.0-RC0 in our development cluster
> > (with
> > > RBF, security, and OpenJDK 11) for end-users. No issues reported.
> > > - The document looks good.
> > > - Deployed pseudo cluster and ran some MapReduce jobs.
> > >
> > > Thanks,
> > > Akira
> > >
> > >
> > > On Tue, Jul 7, 2020 at 7:27 AM Brahma Reddy Battula  >
> > > wrote:
> > >
> > > > Hi folks,
> > > >
> > > > This is the first release candidate for the first release of Apache
> > > > Hadoop 3.3.0
> > > > line.
> > > >
> > > > It contains *1644[1]* fixed jira issues since 3.2.1 which include a
> lot
> > > of
> > > > features and improvements(read the full set of release notes).
> > > >
> > > > Below feature additions are the highlights of the release.
> > > >
> > > > - ARM Support
> > > > - Enhancements and new features on S3a,S3Guard,ABFS
> > > > - Java 11 Runtime support and TLS 1.3.
> > > > - Support Tencent Cloud COS File System.
> > > > - Added security to HDFS Router.
> > > > - Support non-volatile storage class memory(SCM) in HDFS cache
> > directives
> > > > - Support Interactive Docker Shell for running Containers.
> > > > - Scheduling of opportunistic containers
> > > > - A pluggable device plugin framework to ease vendor plugin
> development
> > > >
> > > > *The RC0 artifacts are at*:
> > > > http://home.apache.org/~brahma/Hadoop-3.3.0-RC0/
> > > >
> > > > *First release to include ARM binary, Have a check.*
> > > > *RC tag is *release-3.3.0-RC0.
> > > >
> > > >
> > > > *The maven artifacts are hosted here:*
> > > >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1271/
> > > >
> > > > *My public key is available here:*
> > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > > >
> > > > The vote will run for 5 weekdays, until Tuesday, July 13 at 3:50 AM
> > IST.
> > > >
> > > >
> > > > I have done a few testing with my pseudo cluster. My +1 to start.
> > > >
> > > >
> > > >
> > > > Regards,
> > > > Brahma Reddy Battula
> > > >
> > > >
> > > > 1. project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in
> (3.3.0)
> > > AND
> > > > fixVersion not in (3.2.0, 3.2.1, 3.1.3) AND status = Resolved ORDER
> BY
> > > > fixVersion ASC
> > > >
> > >
> >
>


Re: [VOTE] Release Apache Hadoop 3.1.4 (RC2)

2020-07-10 Thread Gabor Bota
Yes, sure. I'll do another RC for next week.

Thank you all for working on this!

On Thu, Jul 9, 2020 at 8:20 AM Masatake Iwasaki
 wrote:
>
> Hi Gabor Bota,
>
> I committed the fix for YARN-10347 to branch-3.1.
> I think this should be a blocker for 3.1.4.
> Could you cherry-pick it to branch-3.1.4 and cut a new RC?
>
> Thanks,
> Masatake Iwasaki
>
> On 2020/07/08 23:31, Masatake Iwasaki wrote:
> > Thanks Steve and Prabhu for the information.
> >
> > The cause turned out to be locking in CapacityScheduler#reinitialize.
> > I think the method is called after transitioning to active state if
> > RM-HA is enabled.
> >
> > I filed YARN-10347 and created PR.
> >
> >
> > Masatake Iwasaki
> >
> >
> > On 2020/07/08 16:33, Prabhu Joseph wrote:
> >> Hi Masatake,
> >>
> >>   The thread is waiting for a ReadLock; we need to check what the other
> >> thread holding the WriteLock is blocked on.
> >> Can you get three consecutive complete jstacks of the ResourceManager
> >> during the issue?
> >>
>  I got no issue if RM-HA is disabled.
> >> Looks like the RM is not able to access the ZooKeeper state store. Can you
> >> check if there is any connectivity issue between the RM and ZooKeeper?
> >>
> >> Thanks,
> >> Prabhu Joseph
> >>
> >>
> >> On Mon, Jul 6, 2020 at 2:44 AM Masatake Iwasaki
> >> 
> >> wrote:
> >>
> >>> Thanks for putting this up, Gabor Bota.
> >>>
> >>> I'm testing the RC2 on a 3-node docker cluster with NN-HA and RM-HA
> >>> enabled.
> >>> ResourceManager reproducibly blocks on submitApplication while launching
> >>> example MR jobs.
> >>> Has anyone run into the same issue?
> >>>
> >>> The same configuration worked for 3.1.3.
> >>> I got no issue if RM-HA is disabled.
> >>>
> >>>
> >>> "IPC Server handler 1 on default port 8032" #167 daemon prio=5
> >>> os_prio=0
> >>> tid=0x7fe91821ec50 nid=0x3b9 waiting on condition
> >>> [0x7fe901bac000]
> >>>  java.lang.Thread.State: WAITING (parking)
> >>>   at sun.misc.Unsafe.park(Native Method)
> >>>   - parking to wait for  <0x85d37a40> (a
> >>> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> >>>   at
> >>> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> >>>   at
> >>>
> >>> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> >>>
> >>>   at
> >>>
> >>> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
> >>>
> >>>   at
> >>>
> >>> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
> >>>
> >>>   at
> >>>
> >>> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)
> >>>
> >>>   at
> >>>
> >>> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.checkAndGetApplicationPriority(CapacityScheduler.java:2521)
> >>>
> >>>   at
> >>>
> >>> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:417)
> >>>
> >>>   at
> >>>
> >>> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:342)
> >>>
> >>>   at
> >>>
> >>> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:678)
> >>>
> >>>   at
> >>>
> >>> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:277)
> >>>
> >>>   at
> >>>
> >>> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:563)
> >>>
> >>>   at
> >>>
> >>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
> >>>
> >>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
> >>>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
> >>>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
> >>>   at java.security.AccessController.doPrivileged(Native Method)
> >>>   at javax.security.auth.Subject.doAs(Subject.java:422)
> >>>   at
> >>>
> >>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> >>>
> >>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
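The handler above is parked in AbstractQueuedSynchronizer.acquireShared, i.e. it is
waiting for the CapacityScheduler read lock while another thread holds the write
lock; per YARN-10347 that writer is reinitialize() on the transition to active. A
minimal standalone sketch of the same ReentrantReadWriteLock pattern, with
hypothetical class and method names (not the actual CapacityScheduler code):

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class SchedulerLockSketch {
        private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        // Analogue of reinitialize(): holds the write lock for the whole call.
        // If loadStateFromStore() blocks (e.g. the state store is unreachable),
        // the write lock is never released.
        void reinitialize() {
            lock.writeLock().lock();
            try {
                loadStateFromStore();
            } finally {
                lock.writeLock().unlock();
            }
        }

        // Analogue of checkAndGetApplicationPriority(): only needs the read lock,
        // but parks in acquireShared (as in the jstack above) while a writer is active.
        int checkPriority() {
            lock.readLock().lock();
            try {
                return 0;
            } finally {
                lock.readLock().unlock();
            }
        }

        private void loadStateFromStore() {
            // placeholder for the potentially blocking call
        }
    }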
> >>>
> >>>
> >>> Masatake Iwasaki
> >>>
> >>> On 2020/06/26 22:51, Gabor Bota wrote:
>  Hi folks,
> 
>  I have put together a release candidate (RC2) for Hadoop 3.1.4.
> 
>  The RC is available at:
> >>> http://people.apache.org/~gabota/hadoop-3.1.4-RC2/
>  The RC tag in git is here:
>  https://github.com/apache/hadoop/releases/tag/release-3.1.4-RC2
>  The maven artifacts are staged at
>  https://repository.apache.org/content/repositories/orgapachehadoop-1269/
> 
> 
>  You can find my 

Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-07-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/743/

[Jul 9, 2020 3:43:22 AM] (iwasakims) HADOOP-17120. Fix failure of docker image 
creation due to pip2 install




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

findbugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method Dereferenced at 
MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value 
of called method Dereferenced at MiniKdc.java:[line 515] 
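Findings of this kind ("possible null pointer dereference ... due to return value of
called method") typically point at File.listFiles(), which returns null rather than
an empty array on I/O error or when the path is not a directory. A generic sketch of
the checked pattern (illustrative only, not the MiniKdc code):

    import java.io.File;

    public class RecursiveDelete {
        static void delete(File f) {
            File[] children = f.listFiles();
            if (children != null) {   // listFiles() is null on error or non-directory
                for (File child : children) {
                    delete(child);
                }
            }
            f.delete();
        }
    }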

findbugs :

   module:hadoop-common-project/hadoop-auth 
   
org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest,
 HttpServletResponse) makes inefficient use of keySet iterator instead of 
entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator 
instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 
192] 
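The flagged inefficiency is iterating keySet() and then calling get() for each key,
which costs a second lookup per entry; iterating entrySet() yields key and value
together. A generic sketch (illustrative only, not the MultiSchemeAuthenticationHandler
code):

    import java.util.Map;

    public class MapIterationSketch {
        static void printAll(Map<String, String> handlers) {
            // Flagged pattern: one extra map lookup per key inside the loop.
            for (String key : handlers.keySet()) {
                System.out.println(key + " -> " + handlers.get(key));
            }
            // Preferred pattern: entrySet() avoids the per-key get().
            for (Map.Entry<String, String> e : handlers.entrySet()) {
                System.out.println(e.getKey() + " -> " + e.getValue());
            }
        }
    }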

findbugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) 
unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At 
CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) 
unconditionally sets the field unknownValue At 
CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of 
called method Dereferenced at 
FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to 
return value of called method Dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, 
File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path,
 File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:[line 383] 
   Useless condition:lazyPersist == true at this point At 
CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) 
incorrectly handles double value At DoubleWritable.java: At 
DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) 
incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly 
handles float value At FloatWritable.java: At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles float value At FloatWritable.java:int) 
incorrectly handles float value At FloatWritable.java:[line 89] 
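The "incorrectly handles double/float value" warnings above are the classic comparator
pitfall: a hand-rolled less-than/greater-than comparison never orders NaN and treats
-0.0 as equal to 0.0, whereas Double.compare/Float.compare define a total order. A
small self-contained illustration (not the actual Writable code):

    public class DoubleCompareSketch {
        // Flagged style: NaN falls through to 0 ("equal"), and -0.0 == 0.0.
        static int badCompare(double a, double b) {
            return a < b ? -1 : (a > b ? 1 : 0);
        }

        // Preferred: total ordering, NaN and -0.0 handled consistently.
        static int goodCompare(double a, double b) {
            return Double.compare(a, b);
        }

        public static void main(String[] args) {
            System.out.println(badCompare(Double.NaN, 1.0));   // 0  (wrongly "equal")
            System.out.println(goodCompare(Double.NaN, 1.0));  // 1  (NaN sorts last)
            System.out.println(badCompare(-0.0, 0.0));         // 0
            System.out.println(goodCompare(-0.0, 0.0));        // -1 (-0.0 < 0.0)
        }
    }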
   Possible null pointer dereference in 
org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return 
value of called method Dereferenced at 
IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) 
due to return value of called method Dereferenced at IOUtils.java:[line 389] 
   Possible bad parsing of shift operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At 
Utils.java:operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 
398] 
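The "possible bad parsing of shift operation" warning is an operator-precedence trap:
'+' binds tighter than '<<' in Java, so an expression meant as a + (b << 16) is parsed
as (a + b) << 16. Minimal illustration:

    public class ShiftPrecedenceSketch {
        public static void main(String[] args) {
            int major = 1, minor = 2;
            int parsedAsSumFirst = major + minor << 16;  // (1 + 2) << 16 = 196608
            int intended = major + (minor << 16);        // 1 + 131072   = 131073
            System.out.println(parsedAsSumFirst + " vs " + intended);
        }
    }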
   
org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory)
 unconditionally sets the field mmfImpl At DefaultMetricsFactory.java:mmfImpl 
At DefaultMetricsFactory.java:[line 49] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean) 
unconditionally sets the field miniClusterMode At 
DefaultMetricsSystem.java:miniClusterMode At DefaultMetricsSystem.java:[line 
92] 
   Useless object stored in variable seqOs of method 
org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.addOrUpdateToken(AbstractDelegationTokenIdentifier,
 AbstractDelegationTokenSecretManager$Delegat

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-07-10 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/199/

[Jul 9, 2020 4:59:47 AM] (noreply) YARN-10344. Sync netty versions in 
hadoop-yarn-csi. (#2126)
[Jul 9, 2020 7:04:52 AM] (Brahma Reddy Battula) YARN-10341. Yarn Service 
Container Completed event doesn't get processed. Contributed by Bilwa S T.
[Jul 9, 2020 7:20:25 AM] (Sunil G) YARN-10333. YarnClient obtain Delegation 
Token for Log Aggregation Path. Contributed by Prabhu Joseph.
[Jul 9, 2020 6:33:37 PM] (noreply) HADOOP-17079. Optimize UGI#getGroups by 
adding UGI#getGroupsSet. (#2085)
[Jul 9, 2020 7:38:52 PM] (noreply) HDFS-15462. Add 
fs.viewfs.overload.scheme.target.ofs.impl to core-default.xml (#2131)




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 
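The same two findings repeat below once per enclosing module. In generic form
(hypothetical types, not the timeline-service test code) they look like this:

    import java.util.concurrent.Callable;

    public class TestWarningsSketch {
        void examples() throws Exception {
            Callable<String> c = new Callable<String>() {
                @Override
                public String call() {
                    return "x";
                }

                // "Uncallable method": not declared on Callable, and the anonymous
                // type has no name, so no caller can ever invoke it.
                String getInstance() {
                    return "y";
                }
            };
            // "Dead store": the returned value is assigned but never read again.
            String entities = c.call();
        }
    }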

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   module:hadoop-yarn-project 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   

Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-10 Thread Iñigo Goiri
+1 (Binding)

Deployed a cluster on Azure VMs with:
* 3 VMs with HDFS Namenodes and Routers
* 2 VMs with YARN Resource Managers
* 5 VMs with HDFS Datanodes and Node Managers

Tests:
* Executed Teragen+Terasort+Teravalidate.
* Executed wordcount.
* Browsed through the Web UI.



On Fri, Jul 10, 2020 at 1:06 AM Vinayakumar B 
wrote:

> +1 (Binding)
>
> -Verified all checksums and Signatures.
> -Verified site, Release notes and Change logs
>   + May be changelog and release notes could be grouped based on the
> project at second level for better look (this needs to be supported from
> yetus)
> -Tested in x86 local 3-node docker cluster.
>   + Built from source with OpenJdk 8 and Ubuntu 18.04
>   + Deployed 3 node docker cluster
>   + Ran various Jobs (wordcount, Terasort, Pi, etc)
>
> No Issues reported.
>
> -Vinay
>
> On Fri, Jul 10, 2020 at 1:19 PM Sheng Liu  wrote:
>
> > +1 (non-binding)
> >
> > - checkout the "3.3.0-aarch64-RC0" binaries packages
> >
> > - started a clusters with 3 nodes VMs of Ubuntu 18.04 ARM/aarch64,
> > openjdk-11-jdk
> >
> > - checked some web UIs (NN, DN, RM, NM)
> >
> > - Executed a wordcount, TeraGen, TeraSort and TeraValidate
> >
> > - Executed a TestDFSIO job
> >
> > - Executed a Pi job
> >
> > BR,
> > Liusheng
> >
> > On Fri, Jul 10, 2020 at 3:45 PM Zhenyu Zheng  wrote:
> >
> > > +1 (non-binding)
> > >
> > > - Verified all hashes and checksums
> > > - Tested on ARM platform for the following actions:
> > >   + Built from source on Ubuntu 18.04, OpenJDK 8
> > >   + Deployed a pseudo cluster
> > >   + Ran some example jobs(grep, wordcount, pi)
> > >   + Ran teragen/terasort/teravalidate
> > >   + Ran TestDFSIO job
> > >
> > > BR,
> > >
> > > Zhenyu
> > >
> > > On Fri, Jul 10, 2020 at 2:40 PM Akira Ajisaka 
> > wrote:
> > >
> > > > +1 (binding)
> > > >
> > > > - Verified checksums and signatures.
> > > > - Built from the source with CentOS 7 and OpenJDK 8.
> > > > - Successfully upgraded HDFS to 3.3.0-RC0 in our development cluster
> > > (with
> > > > RBF, security, and OpenJDK 11) for end-users. No issues reported.
> > > > - The document looks good.
> > > > - Deployed pseudo cluster and ran some MapReduce jobs.
> > > >
> > > > Thanks,
> > > > Akira
> > > >
> > > >
> > > > On Tue, Jul 7, 2020 at 7:27 AM Brahma Reddy Battula <
> bra...@apache.org
> > >
> > > > wrote:
> > > >
> > > > > Hi folks,
> > > > >
> > > > > This is the first release candidate for the first release of Apache
> > > > > Hadoop 3.3.0
> > > > > line.
> > > > >
> > > > > It contains *1644[1]* fixed jira issues since 3.2.1 which include a
> > lot
> > > > of
> > > > > features and improvements(read the full set of release notes).
> > > > >
> > > > > Below feature additions are the highlights of the release.
> > > > >
> > > > > - ARM Support
> > > > > - Enhancements and new features on S3a,S3Guard,ABFS
> > > > > - Java 11 Runtime support and TLS 1.3.
> > > > > - Support Tencent Cloud COS File System.
> > > > > - Added security to HDFS Router.
> > > > > - Support non-volatile storage class memory(SCM) in HDFS cache
> > > directives
> > > > > - Support Interactive Docker Shell for running Containers.
> > > > > - Scheduling of opportunistic containers
> > > > > - A pluggable device plugin framework to ease vendor plugin
> > development
> > > > >
> > > > > *The RC0 artifacts are at*:
> > > > > http://home.apache.org/~brahma/Hadoop-3.3.0-RC0/
> > > > >
> > > > > *First release to include ARM binary, Have a check.*
> > > > > *RC tag is *release-3.3.0-RC0.
> > > > >
> > > > >
> > > > > *The maven artifacts are hosted here:*
> > > > >
> > >
> https://repository.apache.org/content/repositories/orgapachehadoop-1271/
> > > > >
> > > > > *My public key is available here:*
> > > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > > > >
> > > > > The vote will run for 5 weekdays, until Tuesday, July 13 at 3:50 AM
> > > IST.
> > > > >
> > > > >
> > > > > I have done a few testing with my pseudo cluster. My +1 to start.
> > > > >
> > > > >
> > > > >
> > > > > Regards,
> > > > > Brahma Reddy Battula
> > > > >
> > > > >
> > > > > 1. project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in
> > (3.3.0)
> > > > AND
> > > > > fixVersion not in (3.2.0, 3.2.1, 3.1.3) AND status = Resolved ORDER
> > BY
> > > > > fixVersion ASC
> > > > >
> > > >
> > >
> >
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-07-10 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/200/

No changes


[Error replacing 'FILE' - Workspace is not accessible]

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org