Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-09-22 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/

No changes




-1 overall


The following subsystems voted -1:
compile findbugs hadolint mvninstall mvnsite pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


Specific tests:

XML :

   Parsing Error(s):
      hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
      hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

   mvninstall:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/patch-mvninstall-root.txt [332K]

   compile:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/patch-compile-root-jdk1.7.0_95.txt [172K]

   cc:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/patch-compile-root-jdk1.7.0_95.txt [172K]

   javac:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/patch-compile-root-jdk1.7.0_95.txt [172K]

   compile:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/patch-compile-root-jdk1.8.0_222.txt [112K]

   cc:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/patch-compile-root-jdk1.8.0_222.txt [112K]

   javac:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/patch-compile-root-jdk1.8.0_222.txt [112K]

   checkstyle:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out//testptch/patchprocess/maven-patch-checkstyle-root.txt []

   hadolint:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/diff-patch-hadolint.txt [4.0K]

   mvnsite:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/patch-mvnsite-root.txt [64K]

   pathlen:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/pathlen.txt [12K]

   pylint:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/diff-patch-pylint.txt [24K]

   shellcheck:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/diff-patch-shellcheck.txt [72K]

   shelldocs:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/diff-patch-shelldocs.txt [8.0K]

   whitespace:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/whitespace-eol.txt [12M]
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/whitespace-tabs.txt [1.3M]

   xml:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/xml.txt [12K]

   findbugs:
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common.txt [8.0K]
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/branch-findbugs-hadoop-common-project_hadoop-kms.txt [8.0K]
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/branch-findbugs-hadoop-common-project_hadoop-nfs.txt [4.0K]
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt [20K]
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt [24K]
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt [8.0K]
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-nfs.txt [8.0K]
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [4.0K]
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [8.0K]
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/branch-findbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt [48K]
      https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/453/artifact/out/branch-findbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-common.txt [8.0K]
   

Re: [NOTICE] Building trunk needs protoc 3.7.1

2019-09-22 Thread Vinayakumar B
Thanks Steve.

The idea is not to shade all artifacts. Instead, maintain one artifact
(hadoop-thirdparty) which has all such dependencies (com.google.*, maybe),
and add this artifact as a dependency in Hadoop modules. Use the shaded
classes directly in the code of the Hadoop modules instead of shading at the
package phase.

HBase, Ozone and Ratis already follow this approach. The artifact
(hadoop-thirdparty) with shaded dependencies can be maintained in a separate
repo, as suggested by Stack on HADOOP-13363, or could be maintained as a
separate module in the Hadoop repo. If maintained in a separate repo, it
needs to be built only when there are changes related to the shaded
dependencies.


-Vinay
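
[Editor's note: for illustration, the relocation step behind a
hadoop-thirdparty-style artifact can be sketched with the maven-shade-plugin.
This is a hypothetical fragment only; the relocated package name
(org.apache.hadoop.thirdparty.protobuf) is an assumption for illustration,
not taken from the actual hadoop-thirdparty build.]

```xml
<!-- Sketch only: a maven-shade-plugin relocation as a hadoop-thirdparty-style
     artifact might configure it. The relocated package name below is
     illustrative, not the real hadoop-thirdparty configuration. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <!-- Rewrite the protobuf package into a Hadoop-owned namespace -->
            <pattern>com.google.protobuf</pattern>
            <shadedPattern>org.apache.hadoop.thirdparty.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Hadoop code would then import the relocated package directly, so downstream
applications could keep their own protobuf 2.5.0 on the classpath without
conflict.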

On Sun, 22 Sep 2019, 10:11 pm Steve Loughran,  wrote:

>
>
> On Sun, Sep 22, 2019 at 3:22 PM Vinayakumar B 
> wrote:
>
>>    Protobuf provides wire compatibility between releases, but does not
>> guarantee source compatibility of the generated sources. There will be a
>> compatibility problem if anyone uses generated protobuf messages outside
>> of the Hadoop modules, which ideally shouldn't happen, as generated
>> sources are not public APIs.
>>
>>    There should not be any compatibility problems between releases in
>> terms of communication, provided both sides use the same syntax (proto2)
>> for the proto messages. I have verified this by communication between a
>> protobuf 2.5.0 client and a protobuf 3.7.1 server.
>>
>>    To avoid the downstream transitive dependency classpath problem for
>> users who might be using protobuf 2.5.0 classes, the plan is to shade the
>> 3.7.1 classes and their usages in all Hadoop modules, and keep the 2.5.0
>> jar back in the Hadoop classpath.
>>
>> Hope I have answered your question.
>>
>> -Vinay
>>
>>
> While I support the move and CP isolation, this is going to (finally)
> force us to make shaded versions of all the artifacts we publish with the
> intent of their being loaded on the classpath of other applications.
>


[jira] [Resolved] (HADOOP-16592) Build fails as can't retrieve websocket-server-impl

2019-09-22 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16592.
-
Resolution: Works for Me

> Build fails as can't retrieve websocket-server-impl
> ---
>
> Key: HADOOP-16592
> URL: https://issues.apache.org/jira/browse/HADOOP-16592
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Erkin Alp Güney
>Priority: Blocker
>
> [ERROR] Failed to execute goal on project hadoop-yarn-server-nodemanager: 
> Could not resolve dependencies for project 
> org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:3.3.0-SNAPSHOT: The 
> following artifacts could not be resolved: 
> org.eclipse.jetty.websocket:javax-websocket-server-impl:jar:9.3.27.v20190418, 
> org.eclipse.jetty:jetty-annotations:jar:9.3.27.v20190418, 
> org.eclipse.jetty:jetty-plus:jar:9.3.27.v20190418, 
> org.eclipse.jetty:jetty-jndi:jar:9.3.27.v20190418, 
> org.eclipse.jetty.websocket:javax-websocket-client-impl:jar:9.3.27.v20190418, 
> org.eclipse.jetty.websocket:websocket-client:jar:9.3.27.v20190418, 
> org.eclipse.jetty.websocket:websocket-server:jar:9.3.27.v20190418, 
> org.eclipse.jetty.websocket:websocket-common:jar:9.3.27.v20190418, 
> org.eclipse.jetty.websocket:websocket-api:jar:9.3.27.v20190418, 
> org.eclipse.jetty.websocket:websocket-servlet:jar:9.3.27.v20190418: Could not 
> transfer artifact 
> org.eclipse.jetty.websocket:javax-websocket-server-impl:jar:9.3.27.v20190418 
> from/to apache.snapshots.https 
> (https://repository.apache.org/content/repositories/snapshots): 
> repository.apache.org: Unknown host repository.apache.org -> [Help 1]
> Again, the same as HADOOP-16577, but this time with websocket-server-impl.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [NOTICE] Building trunk needs protoc 3.7.1

2019-09-22 Thread Steve Loughran
On Sun, Sep 22, 2019 at 3:22 PM Vinayakumar B 
wrote:

>    Protobuf provides wire compatibility between releases, but does not
> guarantee source compatibility of the generated sources. There will be a
> compatibility problem if anyone uses generated protobuf messages outside
> of the Hadoop modules, which ideally shouldn't happen, as generated
> sources are not public APIs.
>
>    There should not be any compatibility problems between releases in
> terms of communication, provided both sides use the same syntax (proto2)
> for the proto messages. I have verified this by communication between a
> protobuf 2.5.0 client and a protobuf 3.7.1 server.
>
>    To avoid the downstream transitive dependency classpath problem for
> users who might be using protobuf 2.5.0 classes, the plan is to shade the
> 3.7.1 classes and their usages in all Hadoop modules, and keep the 2.5.0
> jar back in the Hadoop classpath.
>
> Hope I have answered your question.
>
> -Vinay
>
>
While I support the move and CP isolation, this is going to (finally) force
us to make shaded versions of all the artifacts we publish with the intent
of their being loaded on the classpath of other applications.


Re: [NOTICE] Building trunk needs protoc 3.7.1

2019-09-22 Thread Vinayakumar B
   Protobuf provides wire compatibility between releases, but does not
guarantee source compatibility of the generated sources. There will be a
compatibility problem if anyone uses generated protobuf messages outside of
the Hadoop modules, which ideally shouldn't happen, as generated sources are
not public APIs.

   There should not be any compatibility problems between releases in terms
of communication, provided both sides use the same syntax (proto2) for the
proto messages. I have verified this by communication between a protobuf
2.5.0 client and a protobuf 3.7.1 server.

   To avoid the downstream transitive dependency classpath problem for users
who might be using protobuf 2.5.0 classes, the plan is to shade the 3.7.1
classes and their usages in all Hadoop modules, and keep the 2.5.0 jar back
in the Hadoop classpath.

Hope I have answered your question.

-Vinay

On Sun, 22 Sep 2019, 7:38 pm Vinod Kumar Vavilapalli, 
wrote:

> Quick question, being lazy here, lots of JIRA updates on HADOOP-13363 over
> the years not helping either.
>
> Does anyone know what this upgrade will mean w.r.t compatibility for the
> Hadoop releases themselves? Remember that trunk is still 3.x.
>
> Thanks
> +Vinod
>
> > On Sep 21, 2019, at 9:55 AM, Vinayakumar B 
> wrote:
> >
> > @Wangda Tan  ,
> > Sorry for the confusion. HADOOP-13363 is the umbrella jira to track
> > multiple stages of the protobuf upgrade in subtasks (jar upgrade, Docker
> > update, plugin upgrade, shading, etc.).
> > Right now, the first task of the jar upgrade is done, so the protoc
> > executable needs to be updated in the build environments.
> >
> > @张铎(Duo Zhang)  ,
> > Sorry for the inconvenience. Yes, indeed a plugin update before the jar
> > upgrade was possible. Sorry I missed it.
> >
> > The plugin update needs to be done for the whole project, for which
> > precommit Jenkins will need more time to complete end-to-end runs.
> > So the plugin update is planned in stages in further subtasks. It could
> > be done in 2-3 days.
> >
> > -Vinay
> >
> > On Sat, 21 Sep 2019, 5:55 am 张铎(Duo Zhang), 
> wrote:
> >
> >> I think this one is already in place so we have to upgrade...
> >>
> >> https://issues.apache.org/jira/browse/HADOOP-16557
> >>
> >> Wangda Tan  于2019年9月21日周六 上午7:19写道:
> >>
> >>> Hi Vinay,
> >>>
> >>> A bit confused, I saw that HADOOP-13363 is still pending. Do we need
> >>> to upgrade the protobuf version to 3.7.1 NOW, or once HADOOP-13363 is
> >>> completed?
> >>>
> >>> Thanks,
> >>> Wangda
> >>>
> >>> On Fri, Sep 20, 2019 at 8:11 AM Vinayakumar B  >
> >>> wrote:
> >>>
>  Hi All,
> 
>  A very long pending task, protobuf upgrade is happening in
> >> HADOOP-13363.
> >>> As
>  part of that protobuf version is upgraded to 3.7.1.
> 
>  Please update your build environments to have 3.7.1 protobuf version.
> 
>  BUILDING.txt has been updated with the latest instructions.
> 
>  This prerequisite to update the protoc dependency manually is required
> >> until
>  'hadoop-maven-plugin' is replaced with 'protobuf-maven-plugin' to
>  dynamically resolve the required protoc exe.
> 
>  Dockerfile is being updated to have the latest 3.7.1 as the default protoc for
> >>> test
>  environments.
> 
>  Thanks,
>  -Vinay
> 
> >>>
> >>
>
>
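
[Editor's note: since the notice above asks everyone to update their build
environments, a small pre-build check can catch a stale protoc early. A
minimal sketch, assuming `protoc --version` prints a line like
`libprotoc 3.7.1`; the `check_protoc` helper name is mine, not part of
Hadoop's build scripts.]

```shell
#!/bin/sh
# Required protoc version for building trunk, per the notice above.
required_protoc="3.7.1"

# check_protoc "VERSION_OUTPUT": succeeds only when the second field of the
# version line (e.g. "libprotoc 3.7.1") matches the required version.
check_protoc() {
  v=$(printf '%s\n' "$1" | awk '{print $2}')
  [ "$v" = "$required_protoc" ]
}

# Typical use before a build (assumes protoc is on PATH):
#   check_protoc "$(protoc --version)" \
#     || { echo "need protoc $required_protoc" >&2; exit 1; }
```

Running this before `mvn` makes the failure mode an explicit version message
rather than a confusing compile error partway through the build.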


Re: [NOTICE] Building trunk needs protoc 3.7.1

2019-09-22 Thread Vinod Kumar Vavilapalli
Quick question, being lazy here, lots of JIRA updates on HADOOP-13363 over the 
years not helping either.

Does anyone know what this upgrade will mean w.r.t compatibility for the Hadoop 
releases themselves? Remember that trunk is still 3.x.

Thanks
+Vinod

> On Sep 21, 2019, at 9:55 AM, Vinayakumar B  wrote:
> 
> @Wangda Tan  ,
> Sorry for the confusion. HADOOP-13363 is the umbrella jira to track multiple
> stages of the protobuf upgrade in subtasks (jar upgrade, Docker update, plugin
> upgrade, shading, etc.).
> Right now, the first task of the jar upgrade is done, so the protoc
> executable needs to be updated in the build environments.
> 
> @张铎(Duo Zhang)  ,
> Sorry for the inconvenience. Yes, indeed a plugin update before the jar
> upgrade was possible. Sorry I missed it.
> 
> The plugin update needs to be done for the whole project, for which precommit
> Jenkins will need more time to complete end-to-end runs.
> So the plugin update is planned in stages in further subtasks. It could be
> done in 2-3 days.
> 
> -Vinay
> 
> On Sat, 21 Sep 2019, 5:55 am 张铎(Duo Zhang),  wrote:
> 
>> I think this one is already in place so we have to upgrade...
>> 
>> https://issues.apache.org/jira/browse/HADOOP-16557
>> 
>> Wangda Tan  于2019年9月21日周六 上午7:19写道:
>> 
>>> Hi Vinay,
>>> 
>>> A bit confused, I saw that HADOOP-13363 is still pending. Do we need to
>>> upgrade the protobuf version to 3.7.1 NOW, or once HADOOP-13363 is completed?
>>> 
>>> Thanks,
>>> Wangda
>>> 
>>> On Fri, Sep 20, 2019 at 8:11 AM Vinayakumar B 
>>> wrote:
>>> 
 Hi All,
 
 A very long pending task, protobuf upgrade is happening in
>> HADOOP-13363.
>>> As
 part of that protobuf version is upgraded to 3.7.1.
 
 Please update your build environments to have 3.7.1 protobuf version.
 
 BUILDING.txt has been updated with the latest instructions.
 
 This prerequisite to update the protoc dependency manually is required
>> until
 'hadoop-maven-plugin' is replaced with 'protobuf-maven-plugin' to
 dynamically resolve the required protoc exe.
 
 Dockerfile is being updated to have the latest 3.7.1 as the default protoc for
>>> test
 environments.
 
 Thanks,
 -Vinay
 
>>> 
>> 





Re: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk source tree

2019-09-22 Thread Vinod Kumar Vavilapalli
It looks to me that the advantages of this additional step are only
incremental, given that you've already decoupled releases and dependencies.

Do you see a Submarine-like split-also-into-a-TLP for Ozone? If not now,
sometime further down the line? If so, why not do both at the same time? I
felt the same way with Submarine, but couldn't follow up in time.

Thanks
+Vinod

> On Sep 18, 2019, at 4:04 AM, Wangda Tan  wrote:
> 
> +1 (binding).
> 
> From my experiences of Submarine project, I think moving to a separate repo
> helps.
> 
> - Wangda
> 
> On Tue, Sep 17, 2019 at 11:41 AM Subru Krishnan  wrote:
> 
>> +1 (binding).
>> 
>> IIUC, there will not be an Ozone module in trunk anymore, as that was my
>> only concern from the original discussion thread. IMHO, this should be the
>> default approach for new modules.
>> 
>> On Tue, Sep 17, 2019 at 9:58 AM Salvatore LaMendola (BLOOMBERG/ 731 LEX) <
>> slamendo...@bloomberg.net> wrote:
>> 
>>> +1
>>> 
>>> From: e...@apache.org At: 09/17/19 05:48:32To:
>> hdfs-...@hadoop.apache.org,
>>> mapreduce-...@hadoop.apache.org,  common-dev@hadoop.apache.org,
>>> yarn-...@hadoop.apache.org
>>> Subject: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk
>>> source tree
>>> 
>>> 
>>> TLDR; I propose to move Ozone-related code out of Hadoop trunk and
>>> store it in a separate *Hadoop* git repository, apache/hadoop-ozone.git.
>>> 
>>> 
>>> When Ozone was adopted as a new Hadoop subproject, it was proposed[1] to
>>> be part of the source tree but with a separate release cadence, mainly
>>> because it had hadoop-trunk/SNAPSHOT as a compile-time dependency.
>>>
>>> During the last Ozone releases this dependency was removed to provide
>>> more stable releases. Instead of using the latest trunk/SNAPSHOT build
>>> from Hadoop, Ozone uses the latest stable Hadoop (3.2.0 as of now).
>>>
>>> As we no longer have a strict dependency between Hadoop trunk SNAPSHOT
>>> and Ozone trunk, I propose to separate the two code bases from each
>>> other by creating a new Hadoop git repository (apache/hadoop-ozone.git).
>>>
>>> By moving Ozone to a separate git repository:
>>>
>>>  * It would be easier to contribute and understand the build (as of now
>>> we always need `-f pom.ozone.xml` as a Maven parameter)
>>>  * It would be possible to adjust the build process without breaking
>>> Hadoop/Ozone builds.
>>>  * It would be possible to use different Readme/.asf.yaml/github
>>> templates for Hadoop Ozone and core Hadoop. (For example, the current
>>> github template [2] has a link to the contribution guideline [3]. Ozone
>>> has an extended version [4] of this guideline with additional
>>> information.)
>>>  * Testing would be safer, as it won't be possible to change core
>>> Hadoop and Hadoop Ozone in the same patch.
>>>  * It would be easier to cut branches for Hadoop releases (based on the
>>> original consensus, Ozone should be removed from all the release
>>> branches after creating release branches from trunk)
>>> 
>>> 
>>> What do you think?
>>> 
>>> Thanks,
>>> Marton
>>> 
>>> [1]:
>>> 
>>> 
>> https://lists.apache.org/thread.html/c85e5263dcc0ca1d13cbbe3bcfb53236784a39111b8
>>> c353f60582eb4@%3Chdfs-dev.hadoop.apache.org%3E
>>> [2]:
>>> 
>>> 
>> https://github.com/apache/hadoop/blob/trunk/.github/pull_request_template.md
>>> [3]:
>> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
>>> [4]:
>>> 
>>> 
>> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone
>>> 
>>> -
>>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>>> 
>>> 
>>> 
>> 





Re: [VOTE] Release Apache Hadoop 3.2.1 - RC0

2019-09-22 Thread Rohith Sharma K S
Inline comments

On Thu, 19 Sep 2019 at 11:51, Rohith Sharma K S 
wrote:

> Thanks Brahma for voting and bringing this to my attention!
>
> On Thu, 19 Sep 2019 at 11:28, Brahma Reddy Battula 
> wrote:
>
>> Rohith, thanks for driving the release.
>>
>> +1 (Binding).
>>
>> --Built from the source
>> --Installed pseudo cluster
>> --Verified Basic hdfs shell command
>> --Ran Pi jobs
>> --Browsed the UI
>>
>>
>> *Rolling Upgrade:*
>> The following issue could have been merged. Without it, the token needs
>> to be disabled until the rolling upgrade is finalised (since one of the
>> main rolling upgrade issues, HDFS-13596, is already merged).
>> https://issues.apache.org/jira/browse/HDFS-14509
>>
> This issue is marked as a blocker for 2.10 and is still open! Can any of
> the HDFS folks confirm whether it is a blocker for the *hadoop-3.2.1*
> release?
>
 IMO, it is not a blocker for the 3.2.1 release, and I haven't heard any
feedback on this from the HDFS folks. Hence I am moving forward with the
3.2.1 release activities and will be closing the voting thread today.


Re: [VOTE] Release Apache Hadoop 3.2.1 - RC0

2019-09-22 Thread Rohith Sharma K S
Thanks to all who helped verify and vote on the 3.2.1 release! I am
concluding the vote for 3.2.1 RC0.

Summary of votes for hadoop-3.2.1-RC0:

7 binding +1s, from:
--
Sunil Govindan, Brahma Reddy Battula, Steve Loughran, Elek, Marton, Weiwei
Yang, Naganarasimha Garla, Rohith Sharma K S


10 non-binding +1s, from:
---
runlin zhang, Thomas Marquardt, Santosh Marella, Anil Sadineni, Jeffrey
Rodriguez, zhankun tang, Ayush Saxena, Dinesh Chitlangia, Prabhu Joseph,
Abhishek Modi

1 non-binding +0, from:
-
Masatake Iwasaki

and *no -1s*.


So I am glad to announce that the vote for 3.2.1 RC0 passes.


Thanks to everyone listed above who tried the release candidate and voted,
and to all who helped with the 3.2.1 release effort in all kinds of ways.

I'll push the release bits and send out an announcement for 3.2.1 soon.

Thanks,
Rohith Sharma K S





On Thu, 19 Sep 2019 at 14:34, Abhishek Modi  wrote:

> Hi Rohith,
>
> Thanks for driving this release.
>
> +1 (binding)
>
> - built from the source on windows machine.
> - created a pseudo cluster.
> - ran PI job.
> - checked basic metrics with ATSv2 enabled.
>
> On Thu, Sep 19, 2019 at 12:30 PM Sunil Govindan  wrote:
>
>> Hi Rohith
>>
>> Thanks for putting this together, appreciate the same.
>>
>> +1 (binding)
>>
>> - verified signature
>> - brought up a cluster from the tar ball
>> - Ran some basic MR jobs
>> - RM UI seems fine (old and new)
>>
>>
>> Thanks
>> Sunil
>>
>> On Wed, Sep 11, 2019 at 12:56 PM Rohith Sharma K S <
>> rohithsharm...@apache.org> wrote:
>>
>> > Hi folks,
>> >
>> > I have put together a release candidate (RC0) for Apache Hadoop 3.2.1.
>> >
>> > The RC is available at:
>> > http://home.apache.org/~rohithsharmaks/hadoop-3.2.1-RC0/
>> >
>> > The RC tag in git is release-3.2.1-RC0:
>> > https://github.com/apache/hadoop/tree/release-3.2.1-RC0
>> >
>> >
>> > The maven artifacts are staged at
>> >
>> https://repository.apache.org/content/repositories/orgapachehadoop-1226/
>> >
>> > You can find my public key at:
>> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>> >
>> > This vote will run for 7 days (5 weekdays), ending on 18th Sept at 11:59
>> pm
>> > PST.
>> >
>> > I have done testing with a pseudo cluster and distributed shell job. My
>> +1
>> > to start.
>> >
>> > Thanks & Regards
>> > Rohith Sharma K S
>> >
>>
>
>
> --
> Regards,
> Abhishek Modi
>