Re: [E] [NOTICE] Attaching patches in JIRA issue no longer works

2022-03-31 Thread Eric Badger
I think this deserves some attention. More than just the question of JIRA
vs GitHub Issues, I'm a little concerned that we completely changed the way
we post code changes without a vote thread or even a discussion thread that
had a clear outcome. The previous thread ([DISCUSS] Tips for improving
productivity, workflow in the Hadoop project?) had many committers giving
opinions on the matter, but it never came to a conclusion and just sat there
with no traffic for months. The way I read the previous thread was that
committers were proposing that we clean out stale PRs, not that we turn
off JIRA patches/Precommit builds.

I'm not necessarily saying that we should choose patches over GitHub PRs,
but I'm concerned that the decision was made without community
support/consensus and without a vote thread (not sure if that's necessary
for this type of change or not).

Eric

On Mon, Mar 28, 2022 at 1:18 PM Eric Badger  wrote:

> If we're not using patches on JIRA anymore, why are we using JIRA at all?
> Why don't we just use GitHub Issues? Using JIRA to then redirect to GitHub
> seems unintuitive and will fracture the information between two different
> places. Do the conversations happen on JIRA or on a GitHub PR? Having
> conversations on both is confusing and splitting information. I would
> rather use JIRA with patches or GitHub Issues with PRs. I think anything in
> between splits information and makes it hard to find.
>
> Eric
>
> On Sun, Mar 27, 2022 at 1:25 PM Akira Ajisaka  wrote:
>
>> Dear Hadoop developers,
>>
>> I've disabled the Precommit-(HADOOP|HDFS|MAPREDUCE|YARN)-Build jobs.
>> If you attach a patch to a JIRA issue, the Jenkins precommit job won't
>> run.
>> Please use GitHub PR for code review.
>>
>> Background:
>> -
>> https://issues.apache.org/jira/browse/HADOOP-17798
>> -
>> https://lists.apache.org/thread/6g3n4wo3b3tpq2qxyyth3y8m9z4mcj8p
>>
>> Thanks and regards,
>> Akira
>>
>


Re: [E] [NOTICE] Attaching patches in JIRA issue no longer works

2022-03-28 Thread Eric Badger
If we're not using patches on JIRA anymore, why are we using JIRA at all?
Why don't we just use GitHub Issues? Using JIRA to then redirect to GitHub
seems unintuitive and will fracture the information between two different
places. Do the conversations happen on JIRA or on a GitHub PR? Having
conversations on both is confusing and splitting information. I would
rather use JIRA with patches or GitHub Issues with PRs. I think anything in
between splits information and makes it hard to find.

Eric

On Sun, Mar 27, 2022 at 1:25 PM Akira Ajisaka  wrote:

> Dear Hadoop developers,
>
> I've disabled the Precommit-(HADOOP|HDFS|MAPREDUCE|YARN)-Build jobs.
> If you attach a patch to a JIRA issue, the Jenkins precommit job won't run.
> Please use GitHub PR for code review.
>
> Background:
> -
> https://issues.apache.org/jira/browse/HADOOP-17798
> -
> https://lists.apache.org/thread/6g3n4wo3b3tpq2qxyyth3y8m9z4mcj8p
>
> Thanks and regards,
> Akira
>


Re: [E] Re: [VOTE] Hadoop 3.1.x EOL

2021-06-03 Thread Eric Badger
+1

On Thu, Jun 3, 2021 at 6:14 AM Surendra Singh Lilhore <
surendralilh...@gmail.com> wrote:

> +1
>
> Thanks and Regards
> Surendra
>
>
> On Thu, Jun 3, 2021 at 4:31 PM tom lee  wrote:
>
> > +1
> >
> > > Masatake Iwasaki wrote on Thursday, June 3, 2021 at 5:43 PM:
> >
> > > +1
> > >
> > > Masatake Iwasaki
> > >
> > > On 2021/06/03 15:14, Akira Ajisaka wrote:
> > > > Dear Hadoop developers,
> > > >
> > > > Given the feedback from the discussion thread [1], I'd like to start
> > > > an official vote
> > > > thread for the community to vote and start the 3.1 EOL process.
> > > >
> > > > What this entails:
> > > >
> > > > (1) an official announcement that no further regular Hadoop 3.1.x
> > > releases
> > > > will be made after 3.1.4.
> > > > (2) resolve JIRAs that specifically target 3.1.5 as won't fix.
> > > >
> > > > This vote will run for 7 days and conclude by June 10th, 16:00 JST
> [2].
> > > >
> > > > Committers are eligible to cast binding votes. Non-committers are
> > > welcomed
> > > > to cast non-binding votes.
> > > >
> > > > Here is my vote, +1
> > > >
> > > > [1]
> https://s.apache.org/w9ilb
> > > > [2]
> > >
> >
> https://www.timeanddate.com/worldclock/fixedtime.html?msg=4&iso=20210610T16&p1=248
> > > >
> > > > Regards,
> > > > Akira
> > > >
> > > > -
> > > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> > > >
> > >
> > >
> > >
> >
>


Re: [E] Re: Do I need a +1 to merge a backport PR?

2021-06-02 Thread Eric Badger
I'm of a similar opinion to most here. If the backport is clean, I think
it's ok to do it with just the +1 on the original patch. However, please
please please build the code on the target branch before backporting.

Eric

On Wed, Jun 2, 2021 at 2:46 PM Ayush Saxena  wrote:

> For trivial changes, like changes in import or conflicts due to line
> number or other trivial stuff, I don't think that is required. As long as the
> general logic isn't changing, we can go ahead; maybe we can do a test run
> before merging, to be on the safer side as and when required. :-)
>
> -Ayush
>
> > On 02-Jun-2021, at 10:13 AM, Wei-Chiu Chuang  wrote:
> >
> > I'm curious about the GitHub PR conventions we use today... say I want
> to
> > backport a commit from trunk to branch-3.3, and there's a small code
> > conflict so I push a PR against branch-3.3 using GitHub to go through the
> > precommit check.
> >
> > Do I need explicit approval from another committer to merge the backport
> > PR? (As a committer, I know I can merge at any time) or can I merge when
> > the precommit comes back okay?
>
>
>


Java 8 Lambdas

2021-04-27 Thread Eric Badger
Hello all,

I'd like to gauge the community on the usage of lambdas within Hadoop code.
I've been reviewing a lot of patches recently that either add or modify
lambdas and I'm beginning to think that sometimes we, as a community, are
writing lambdas because we can rather than because we should. To me, it
seems that lambdas often decrease the readability of the code, making it
more difficult to understand. I don't personally know a lot about the
performance of lambdas and welcome arguments on behalf of why lambdas
should be used. An additional argument is that lambdas aren't available in
Java 7, and branch-2.10 currently supports Java 7. So any code going back
to branch-2.10 has to be redone upon backporting. Anyway, my main point
here is to encourage us to rethink whether we should be using lambdas in
any given circumstance just because we can.
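To make the readability trade-off concrete, here is a hypothetical side-by-side (illustrative code, not taken from Hadoop): the same sort written as a Java 7 anonymous class and as a Java 8 lambda.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class LambdaReadability {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("yarn", "hdfs", "mapreduce");

        // Java 7 style: anonymous inner class. Verbose, but the types are
        // explicit and it backports to branch-2.10 unchanged.
        names.sort(new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        });

        // Java 8 lambda: terser, but argument types are inferred and the reader
        // must know Comparator's single abstract method to follow it. This line
        // would have to be rewritten when backporting to a Java 7 branch.
        names.sort((a, b) -> Integer.compare(a.length(), b.length()));

        System.out.println(names); // prints [yarn, hdfs, mapreduce]
    }
}
```

Which form reads better is exactly the judgment call the paragraph above asks reviewers to make case by case.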

Eric

p.s. I'm also happy to accept this as my personal "old man yells at cloud"
issue if everyone else thinks lambdas are the greatest


Re: [E] Re: [VOTE] Release Apache Hadoop 3.2.2 - RC4

2020-12-21 Thread Eric Badger
I've committed https://issues.apache.org/jira/browse/YARN-10540. Xiaoqiao,
feel free to cherry-pick this into the 3.2.2 release branch if you think it
is relevant.

Also, can someone tell me which releases we should be targeting? Currently
these versions are all Unreleased on JIRA:
3.4.1, 3.4.0
3.3.1, 3.3.0
3.2.3, 3.2.2

As far as I know, neither trunk nor 3.3 has a release going on, so I don't
know why two versions are listed as unreleased there.

Eric

On Mon, Dec 21, 2020 at 3:20 PM Jim Brennan
 wrote:

> I put up a patch for
> https://issues.apache.org/jira/browse/YARN-10540
> .
> Thanks for bringing it to my attention.
> Jim
>
> On Mon, Dec 21, 2020 at 10:36 AM Sunil Govindan  wrote:
>
> > I had some offline talks with a few folks.
> > This issue is happening only on Mac, so ideally it does not cause much
> > of a problem on the supported OSes.
> >
> > I will wait for feedback here to see whether we need another RC with this
> > fixed, and will continue the discussion in the JIRA.
> >
> > Thanks
> > Sunil
> >
> > On Sat, Dec 19, 2020 at 11:07 PM Sunil Govindan 
> wrote:
> >
> > > Thanks, Xiaoqiao.
> > > All files are looking good.
> > >
> > > However, while I did the tests to verify the RC, I ran into a serious
> NPE
> > > in YARN.
> > > I raised YARN-10540 <
> >
> https://issues.apache.org/jira/browse/YARN-10540
> > > to
> > > analyze this further. I think this issue is due to YARN-10450
> > > <
> >
> https://issues.apache.org/jira/browse/YARN-10450
> > >.
> > > In trunk, I am not able to see this issue, so it could be that some
> > > patches were not backported to branch-3.2.2.
> > >
> > > UI1 & UI2 nodes page is not working at this moment. I will check a bit
> > > more to see about this and update here.
> > >
> > > Thanks
> > > Sunil
> > >
> > > On Sat, Dec 19, 2020 at 5:36 PM Xiaoqiao He 
> > wrote:
> > >
> > >> Thanks Sunil, md5 files have been removed from RC4. Please have a
> look.
> > >> Thanks & Regards.
> > >>
> > >> - He Xiaoqiao
> > >>
> > >> On Sat, Dec 19, 2020 at 7:22 PM Sunil Govindan 
> > wrote:
> > >>
> > >>> Hi Xiaoqiao,
> > >>>
> > >>> Please remove the md5 files from your shared RC4 repo. Thanks, @Akira
> > >>> Ajisaka  for sharing this input.
> > >>>
> > >>> Thanks
> > >>> Sunil
> > >>>
> > >>> On Sat, Dec 19, 2020 at 10:21 AM Sunil Govindan 
> > >>> wrote:
> > >>>
> >  Reference:
> > 
> >
> https://www.apache.org/dev/release-distribution.html#sigs-and-sums
> >  Also, we had a Jira to track this HADOOP-15930
> >  <
> >
> https://issues.apache.org/jira/browse/HADOOP-15930
> > >.
> > 
> >  Thanks
> >  Sunil
> > 
> >  On Sat, Dec 19, 2020 at 10:16 AM Sunil Govindan 
> >  wrote:
> > 
> > > Hi Xiaoqiao and Wei-chiu
> > >
> > > I am a bit confused after seeing both *.sha512 and *.md5 files in
> the
> > > RC directory.
> > > Are we releasing both now?
> > >
> > > Thanks
> > > Sunil
> > >
> > > On Wed, Dec 9, 2020 at 10:32 PM Xiaoqiao He  >
> > > wrote:
> > >
> > >> Hi folks,
> > >>
> > >> The release candidate (RC4) for Hadoop-3.2.2 is available now.
> > >> There are 10 commits[1] differences between RC4 and RC3[2].
> > >>
> > >> The RC4 is available at:
> > >>
> >
> http://people.apache.org/~hexiaoqiao/hadoop-3.2.2-RC4
> > >> The RC4 tag in github is here:
> > >>
> >
> 

Re: [E] Re: Hadoop 3.2.2 Release Code Freeze Plan

2020-10-16 Thread Eric Badger
Hi Xiaoqiao,

I believe that
https://github.com/apache/hadoop/commit/3274fd139d9b612e449fc234f8804a2a97ae6c47
broke compilation for branch-3.2. Looks like you missed a 3.2.2 -> 3.2.3 in
pom.xml in the top-level directory.

https://github.com/apache/hadoop/blob/3274fd139d9b612e449fc234f8804a2a97ae6c47/pom.xml#L83
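For context, Maven requires the version declared in the root pom to agree with what the child module poms reference as their parent, which is why a missed bump breaks the whole branch build. A sketch of the element in question (coordinates shown here are illustrative, not verified against the commit):

```xml
<!-- top-level pom.xml: must be bumped together with every module pom
     when branch-3.2 moves to 3.2.3 development, or the build fails -->
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-main</artifactId>
<version>3.2.3-SNAPSHOT</version>
```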

Eric

On Fri, Oct 16, 2020 at 6:43 AM Sunil Govindan  wrote:

> Looks good to me.
>
> Thanks, Xiaoqiao.
>
> + Sunil
>
> On Fri, Oct 16, 2020 at 12:36 PM Xiaoqiao He 
> wrote:
>
> > Thanks Sunil for your reminder,
> >
> > branch-3.2.2 is ready now [1] and also mark branch-3.2 to 'prepare for
> > 3.2.3 development' [2].
> > Please help to give another check if you have time.
> >
> > Thanks.
> >
> > [1]
> https://github.com/apache/hadoop/commits/branch-3.2.2
> > [2]
> https://github.com/apache/hadoop/commits/branch-3.2
> >
> > On Fri, Oct 16, 2020 at 12:03 PM Sunil Govindan 
> wrote:
> >
> >> Thank you
> >>
> >> Did we also cut the branch already ?
> >>
> >> Sunil
> >>
> >> On Fri, 16 Oct 2020 at 9:14 AM, Xiaoqiao He 
> >> wrote:
> >>
> >>> Hi All,
> >>>
> >>> All issues targeted to 3.2.2 have now been resolved [1]. Thanks,
> >>> everyone, for your work.
> >>> I will try to freeze the code and create the release candidate today.
> >>> Thanks.
> >>>
> >>> [1]
> >>>
> https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12335948
> >>>
> >>> On Fri, Oct 2, 2020 at 2:34 PM Xiaoqiao He 
> >>> wrote:
> >>>
>  Thanks to Wei-Chiu for your reminder, the dashboard and filters have
>  been updated. It is visible to the public now.
> 
>  Thanks, Regards.
> 
>  [1]
> 
> https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12335948
> 
> 
>  On Fri, Oct 2, 2020 at 6:30 AM Wei-Chiu Chuang 
>  wrote:
> 
> > Thanks Xiaoqiao!
> > Glad to see this is moving along. I noticed your dashboard has
> private
> > filters and therefore the results are not visible publicly.
> >
> > On Tue, Sep 29, 2020 at 10:02 PM Xiaoqiao He 
> > wrote:
> >
> > > Hi All,
> > >
> > > Plan to freeze code for the Hadoop 3.2.2 release on 2020/10/15. As of
> > > now, most of the issues have been resolved, and there are two JIRAs
> > > still open which target 3.2.2, tracked by [1].
> > >
> > > *
> https://issues.apache.org/jira/browse/YARN-10244
> > > *
> https://issues.apache.org/jira/browse/HADOOP-17287
> > >
> > > Please let us know if these are really blocking for 3.2.2; if not,
> > > kindly move them out. If they are required for 3.2.2, please try to
> > > push them forward soon.
> > >
> > > Thanks & Best Regards,
> > > He Xiaoqiao
> > >
> > > [1]
> > >
> >
> https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12335948
> > >
> >
> 
>


Remove whitelist/blacklist terminology from Hadoop

2020-09-21 Thread Eric Badger
Hi,

Could I get some people to review this JIRA? It is a large patch and
touches a lot of files. Additionally, it makes some incompatible changes,
notably changing the names of config keys.

https://issues.apache.org/jira/browse/HADOOP-17169

Eric


Re: [E] Re: [VOTE] Release Apache Hadoop 2.10.1 (RC0)

2020-09-18 Thread Eric Badger
+1 (non-binding)

- Verified all hashes and checksums
- Built from source on RHEL 7
- Deployed a single node pseudo cluster
- Ran some example jobs
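For anyone repeating the verification steps above: the checksum step amounts to recomputing SHA-512 over each artifact and comparing it against the published .sha512 file (signatures are checked separately with gpg against the KEYS file). A minimal Java sketch of the digest comparison, with placeholder file names:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ChecksumVerifier {
    /** Computes the SHA-512 digest of a file as a lowercase hex string. */
    public static String sha512Hex(Path file) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-512");
        // Fine for a sketch; a real check would stream large release tarballs.
        byte[] digest = md.digest(Files.readAllBytes(file));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Placeholder names: compare against the published .sha512 contents.
        Path tarball = Paths.get("hadoop-2.10.1.tar.gz");
        String expected = Files.readString(Paths.get("hadoop-2.10.1.tar.gz.sha512")).trim();
        System.out.println(expected.contains(sha512Hex(tarball))
                ? "checksum OK" : "checksum MISMATCH");
    }
}
```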

The only minor thing is that the DockerLinuxContainerRuntime won't run in a
secure environment using kerberos and nscd because there is no way to
bind-mount the nscd socket without modifying the source code. However,
DockerLinuxContainerRuntime is wholly experimental in 2.10 and anyone using
it should be on 3.x

Eric

On Fri, Sep 18, 2020 at 3:21 PM Eric Payne 
wrote:

> Masatake,
>
> Thank you for the good work on creating this release!
>
> +1
>
> I downloaded and built the source. I ran a one-node cluster with 6 NMs.
> I manually ran apps in the Capacity Scheduler to test labels and capacity
> assignments.
>
> -Eric
>
>
> On Monday, September 14, 2020, 12:59:17 PM CDT, Masatake Iwasaki <
> iwasak...@oss.nttdata.co.jp> wrote:
>
> Hi folks,
>
> This is the first release candidate for the second release of Apache
> Hadoop 2.10.
> It contains 218 fixes/improvements since 2.10.0 [1].
>
> The RC0 artifacts are at:
>
> http://home.apache.org/~iwasakims/hadoop-2.10.1-RC0/
>
> RC tag is release-2.10.1-RC0:
>
> https://github.com/apache/hadoop/tree/release-2.10.1-RC0
>
> The maven artifacts are hosted here:
>
> https://repository.apache.org/content/repositories/orgapachehadoop-1279/
>
> My public key is available here:
>
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> The vote will run for 5 days, until Saturday, September 19 at 10:00 am PDT.
>
> [1]
> https://issues.apache.org/jira/issues/?jql=project%20in%20(HDFS%2C%20YARN%2C%20HADOOP%2C%20MAPREDUCE)%20AND%20resolution%20%3D%20Fixed%20AND%20fixVersion%20%3D%202.10.1
>
> Thanks,
> Masatake Iwasaki
>
>
>


Unstable Unit Tests in Trunk

2020-09-01 Thread Eric Badger
While putting up patches for HADOOP-17169
 I noticed that the
unit tests in trunk, specifically in HDFS, are incredibly unstable. Every
time I put up a new patch, 4-8 unit tests failed with failures that were
completely unrelated to the patch. I'm pretty confident in that since the
patch is simply changing variable names. I also ran the unit tests locally
and they would pass (or fail intermittently).

Is there an effort to stabilize the unit tests? I don't know if these are
bugs or if they're bad tests. But in either case, it's bad for the
stability of the project.

Eric


Re: [E] Re: A more inclusive elephant...

2020-08-31 Thread Eric Badger
https://issues.apache.org/jira/browse/HADOOP-17169

I don't really know who the people are to review this patch, but it removes
all non-inclusive terminology from Hadoop Common. This ends up changing
some things in some other projects (mostly HDFS) as well since they depend
on stuff from hadoop-common. I believe patch 003 is ready for review and
would appreciate some experts letting me know if anything is a bad change
code-wise.

Eric

On Thu, Jul 30, 2020 at 5:59 PM Vivek Ratnavel Subramanian
 wrote:

> Hi Eric and Carlo,
>
> Thanks for taking the initiative! I am willing to take this task up for
> improving the Ozone codebase.
>
> I have cloned the task and sub-tasks for Ozone -
>
> https://issues.apache.org/jira/browse/HDDS-4050
>
> - Vivek Subramanian
>
> On Thu, Jul 30, 2020 at 3:54 PM Eric Badger
>  wrote:
>
> > Thanks for the responses, Jon and Carlo!
> >
> > It makes sense to me to prevent future patches from re-introducing the
> > terminology. I can file a JIRA to add the +1/-1 functionality to the
> > precommit builds.
> >
> > As for splitting up the work, I think it'll probably be easiest and
> > cleanest to have an umbrella for each subproject of Hadoop (Hadoop, HDFS,
> > YARN, Mapreduce) with smaller tasks (e.g. whitelist/blacklist,
> > master/slave) as subtasks of each umbrella. That way each expert can
> chime
> > in on their respective area of expertise and the patches won't be
> gigantic. I
> > can then link the umbrella JIRAs together so everything can be found
> > easily. As Carlo pointed out, it's unclear whether fewer, but larger
> > patches is better or worse than more, smaller patches. But I think that
> at
> > least for the sake of manageability and getting this into Apache, smaller
> > patches is likely easier.
> >
> > Eric
> >
> > On Thu, Jul 30, 2020 at 5:50 PM Carlo Aldo Curino <
> carlo.cur...@gmail.com>
> > wrote:
> >
> > > Thanks again Eric for leading the charge. As for whether to chop it up
> or
> > > keep it in fewer patches, I think it primarily impacts the conflict
> > surface
> > > with dev branches and other in-flight development. More patches are
> > likely
> > > creating more localized clashes (as in I clash with a smaller patch,
> > which
> > > might be less daunting, though potentially more of them to deal with).
> I
> > > don't have a strong preference, maybe chunking it into reasonable
> > packages,
> > > so that you can involve the right core group of committers to weigh in
> for
> > > each sub-area.
> > >
> > > Thanks,
> > > Carlo
> > >
> > >
> > >
> > > On Thu, Jul 30, 2020 at 1:20 PM Jonathan Eagles 
> > wrote:
> > >
> > > > Thanks, Eric. I like this proposal and I'm glad this work is getting
> > > > traction. A few thoughts on implementation.
> > > >
> > > > Once the fix is done, I think it will be necessary to ensure these
> > > > language restrictions are enforced at the patch level. This will
> +1/-1
> > > > patches that introduce terminology that violates our policy.
> > > >
> > > > As to splitting up the patches, it may be necessary to split these
> > up
> > > > further in cases where feature experts need to weigh in on
> > compatibility
> > > > (usually with regards to persistence or wire compatibility). This can
> > be
> > > > done on a case-by-case basis.
> > > >
> > > > Regards,
> > > > jeagles
> > > >
> > > > On Thu, Jul 30, 2020 at 1:28 PM Eric Badger
> > > >  wrote:
> > > >
> > > >> I have created
> > >
> >
> https://issues.apache.org/jira/browse/HADOOP-17168
> > > to
> > > >> remove
> > > >> non-inclusive terminology from Hadoop. However I would like input on
> > how
> > > >> to
> > > >> go about putting up patches. This umbrella JIRA is under Hadoop
> > Common,
> > > >> but
> > > >> there are sure to be instances in YARN, HDFS, and Mapreduce. Should
> I
> > > >> create an umbrella like this for each subproject? Or should I do all
> > > >> whitelist/blacklist fixes in a single JIRA that fixes them across
> all
> > > >> Hadoop subproj

[jira] [Created] (HADOOP-17187) Report non-inclusive language as part of code contribution pre-commit check

2020-08-05 Thread Eric Badger (Jira)
Eric Badger created HADOOP-17187:


 Summary: Report non-inclusive language as part of code 
contribution pre-commit check
 Key: HADOOP-17187
 URL: https://issues.apache.org/jira/browse/HADOOP-17187
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Eric Badger






--
This message was sent by Atlassian Jira
(v8.3.4#803005)




Re: [E] [ANNOUNCE] Jim Brennan is a new Hadoop Committer

2020-08-03 Thread Eric Badger
Congrats, Jim! Well-deserved and much overdue!

Eric

On Mon, Aug 3, 2020 at 11:38 AM epa...@apache.org  wrote:

> I am pleased to announce that Jim Brennan has accepted the invitation to
> become a Hadoop committer focusing on the YARN space.
>
> Please reach out to Jim and welcome him in his new role.
>
> Congratulations, Jim! Well-deserved!
>
> -Eric Payne
>
>
>


Re: [E] Re: A more inclusive elephant...

2020-07-30 Thread Eric Badger
Thanks for the responses, Jon and Carlo!

It makes sense to me to prevent future patches from re-introducing the
terminology. I can file a JIRA to add the +1/-1 functionality to the
precommit builds.
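As a rough sketch of what such a precommit check could do (the deny-list, class, and method names below are made up for illustration; they are not the eventual HADOOP-17187 implementation): scan the added lines of a patch for flagged terms, and vote -1 when any are found.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class InclusiveLanguageCheck {
    // Hypothetical deny-list; a real check would load this from configuration.
    private static final List<String> FLAGGED_TERMS =
            Arrays.asList("whitelist", "blacklist", "master", "slave");

    /** Returns the flagged terms that occur in the "+" (added) lines of a diff. */
    public static List<String> findFlaggedTerms(List<String> addedLines) {
        return addedLines.stream()
                .map(String::toLowerCase)
                .flatMap(line -> FLAGGED_TERMS.stream().filter(line::contains))
                .distinct()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> added = Arrays.asList(
                "+    conf.set(\"yarn.nodes.allowlist\", hosts);",
                "+    // migrate from the old blacklist key");
        List<String> hits = findFlaggedTerms(added);
        // A non-empty result would translate to a -1 on the precommit report.
        System.out.println(hits.isEmpty() ? "+1" : "-1 " + hits);
    }
}
```

A real integration would also need an allow-list for legitimate uses (e.g. quoting deprecated config keys for compatibility), which is part of why a JIRA discussion is warranted.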

As for splitting up the work, I think it'll probably be easiest and
cleanest to have an umbrella for each subproject of Hadoop (Hadoop, HDFS,
YARN, Mapreduce) with smaller tasks (e.g. whitelist/blacklist,
master/slave) as subtasks of each umbrella. That way each expert can chime
in on their respective area of expertise and the patches won't be gigantic. I
can then link the umbrella JIRAs together so everything can be found
easily. As Carlo pointed out, it's unclear whether fewer, but larger
patches is better or worse than more, smaller patches. But I think that at
least for the sake of manageability and getting this into Apache, smaller
patches is likely easier.

Eric

On Thu, Jul 30, 2020 at 5:50 PM Carlo Aldo Curino 
wrote:

> Thanks again Eric for leading the charge. As for whether to chop it up or
> keep it in fewer patches, I think it primarily impacts the conflict surface
> with dev branches and other in-flight development. More patches are likely
> creating more localized clashes (as in I clash with a smaller patch, which
> might be less daunting, though potentially more of them to deal with). I
> don't have a strong preference, maybe chunking it into reasonable packages,
> so that you can involve the right core group of committers to weigh in for
> each sub-area.
>
> Thanks,
> Carlo
>
>
>
> On Thu, Jul 30, 2020 at 1:20 PM Jonathan Eagles  wrote:
>
> > Thanks, Eric. I like this proposal and I'm glad this work is getting
> > traction. A few thoughts on implementation.
> >
> > Once the fix is done, I think it will be necessary to ensure these
> > language restrictions are enforced at the patch level. This will +1/-1
> > patches that introduce terminology that violates our policy.
> >
> > As to splitting up the patches, it may be necessary to split these up
> > further in cases where feature experts need to weigh in on compatibility
> > (usually with regards to persistence or wire compatibility). This can be
> > done on a case-by-case basis.
> >
> > Regards,
> > jeagles
> >
> > On Thu, Jul 30, 2020 at 1:28 PM Eric Badger
> >  wrote:
> >
> >> I have created
> https://issues.apache.org/jira/browse/HADOOP-17168
> to
> >> remove
> >> non-inclusive terminology from Hadoop. However I would like input on how
> >> to
> >> go about putting up patches. This umbrella JIRA is under Hadoop Common,
> >> but
> >> there are sure to be instances in YARN, HDFS, and Mapreduce. Should I
> >> create an umbrella like this for each subproject? Or should I do all
> >> whitelist/blacklist fixes in a single JIRA that fixes them across all
> >> Hadoop subprojects?
> >>
> >> Thanks,
> >>
> >> Eric
> >>
> >> On Thu, Jul 30, 2020 at 8:47 AM Carlo Aldo Curino <
> carlo.cur...@gmail.com
> >> >
> >> wrote:
> >>
> >> > RE Mentorship: I think the Mentorship program is an interesting idea.
> >> The
> >> > concern with these efforts is always the follow-through. If you can
> >> find a
> >> > group of folks that are motivated and will work on this I think it
> >> could be
> >> > a great idea, especially if you focus on a diverse set of mentees, and
> >> the
> >> > focus is on teaching not just code but a bit of the "apache way" of
> >> > interacting, and conducting yourself in open-source.
> >> >
> >> > RE Diversity and representation: Wei-Chiu I think you raise an
> important
> >> > problem. The main force behind this is typically for a company to be
> >> deeply
> >> > invested in a project, valuing OSS, and putting lots of full-time
> >> > developers on it. Those will naturally become committers. On one side
> >> this
> >> > is good for the project, unless it becomes so unbalanced that the OSS
> >> nature
> >> > of the effort is in question. Attracting more contributors across
> >> > companies/countries (and along any other dimension of diversity) is
> >> > important.
> >> > @Vinod I am sure you have been thinking about this issue, any
> thoughts?
> >> >
> >> > Thanks,
> >> > Carlo
> >> >
> >> > On Fri, Jul 10, 2020 at 1:49 PM Ahmed Hussein  wrote:
> >> >
> >> >> +1, thi

Re: [E] Re: A more inclusive elephant...

2020-07-30 Thread Eric Badger
I have created https://issues.apache.org/jira/browse/HADOOP-17168 to remove
non-inclusive terminology from Hadoop. However I would like input on how to
go about putting up patches. This umbrella JIRA is under Hadoop Common, but
there are sure to be instances in YARN, HDFS, and Mapreduce. Should I
create an umbrella like this for each subproject? Or should I do all
whitelist/blacklist fixes in a single JIRA that fixes them across all
Hadoop subprojects?

Thanks,

Eric

On Thu, Jul 30, 2020 at 8:47 AM Carlo Aldo Curino 
wrote:

> RE Mentorship: I think the Mentorship program is an interesting idea. The
> concern with these efforts is always the follow-through. If you can find a
> group of folks that are motivated and will work on this I think it could be
> a great idea, especially if you focus on a diverse set of mentees, and the
> focus is on teaching not just code but a bit of the "apache way" of
> interacting, and conducting yourself in open-source.
>
> RE Diversity and representation: Wei-Chiu I think you raise an important
> problem. The main force behind this is typically for a company to be deeply
> invested in a project, valuing OSS, and putting lots of full-time
> developers on it. Those will naturally become committers. On one side this
> is good for the project, unless it becomes so unbalanced that the OSS nature
> of the effort is in question. Attracting more contributors across
> companies/countries (and along any other dimension of diversity) is important.
> @Vinod I am sure you have been thinking about this issue, any thoughts?
>
> Thanks,
> Carlo
>
> On Fri, Jul 10, 2020 at 1:49 PM Ahmed Hussein  wrote:
>
>> +1, this is great folks.
>>
>> In addition to that initiative, do you think there is a chance to launch
>> a "*Hadoop Mentorship Program for Minority Students*"
>>
>> *The program will work as follows:*
>>
>>- Define a programme committee to administer and mentor candidates.
>>- The committee defines a timeline for applications and projects,
>>say about three months (similar to an internship).
>>- Define a list of ideas/projects that can be picked by the candidates.
>>- Candidates can propose their own ideas as well. This can be a good way
>>to inject new blood and research ideas into Hadoop.
>>- Pick the top applications and assign them to mentors.
>>- If sponsors can allocate money, then candidates with good
>>evaluations can get some sort of prize. If no money is allocated, then we
>>can discuss other kinds of motivation.
>>
>> I remember there were student mentorship programmes in open-source
>> projects like "JikesRVM", and several proposals were actually merged and/or
>> transformed into publications.
>> There are many missing links that need to be filled in, like how to define
>> the target and the audience of the programme.
>>
>> Let me know WDYT guys.
>>
>> On Fri, Jul 10, 2020 at 1:45 PM Wei-Chiu Chuang 
>> wrote:
>>
>>> Thanks Carlo and Eric for the initiative.
>>>
>>> I am all for it and I'll do my part to mind the code. This is a small yet
>>> meaningful step we can take. Meanwhile, I'd like to take this opportunity
>>> to open up a conversation around Diversity & Inclusion within the
>>> community.
>>>
>>> If you read this quarter's Hadoop board report, I am starting to collect
>>> metrics about the composition of our community in order to understand if
>>> we
>>> are building a diverse & inclusive community. Things that are obvious to
>>> me
>>> that I thought I should report are the following: affiliation among
>>> committers, and demographics of committers. As of last quarter, 4 out of 7
>>> newly minted committers are affiliated with Cloudera. 4 out of the 7 said
>>> committers are located in Asia. Those facts suggest we have good
>>> international participation (I am being US-centric), which is good.
>>> However, having half of the active committers affiliated with one company
>>> is a potential problem.
>>>
>>> I'd like to hear your thoughts on this. What other metrics should we
>>> collect, and what actions can we take?
>>>
>>>
>>>
>>> On Fri, Jul 10, 2020 at 11:29 AM Carlo Aldo Curino <
>>> carlo.cur...@gmail.com>
>>> wrote:
>>>
>>> > Eric,
>>> >
>>> > Thank you so much for the support and for stepping up offering to work
>>> on
>>> > this. I am super +1 on this. Let's give folks a few more days to chime
>>> in,
>>> 

[jira] [Created] (HADOOP-17170) Remove master/slave terminology from Hadoop

2020-07-30 Thread Eric Badger (Jira)
Eric Badger created HADOOP-17170:


 Summary: Remove master/slave terminology from Hadoop
 Key: HADOOP-17170
 URL: https://issues.apache.org/jira/browse/HADOOP-17170
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Eric Badger






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17169) Remove whitelist/blacklist from Hadoop

2020-07-30 Thread Eric Badger (Jira)
Eric Badger created HADOOP-17169:


 Summary: Remove whitelist/blacklist from Hadoop
 Key: HADOOP-17169
 URL: https://issues.apache.org/jira/browse/HADOOP-17169
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Eric Badger
Assignee: Eric Badger









[jira] [Created] (HADOOP-17168) Remove non-inclusive terminology from Hadoop

2020-07-30 Thread Eric Badger (Jira)
Eric Badger created HADOOP-17168:


 Summary: Remove non-inclusive terminology from Hadoop
 Key: HADOOP-17168
 URL: https://issues.apache.org/jira/browse/HADOOP-17168
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Eric Badger


http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/202007.mbox/%3CCAAaVJWVXhsv4tn1KOQkKYTaQ441Yb8y7s%2BR_GnESwduB1iFxOA%40mail.gmail.com%3E

This JIRA is to remove offensive and non-inclusive terminology from Hadoop. The
simple ones are whitelist/blacklist and master/slave. However, this JIRA can
also serve as a place to fix other non-inclusive terminology (e.g., binary
gendered examples, "Alice" systematically doing the wrong security thing).

As [~curino] posted in his email, the IETF has created a draft of proposed
alternatives:
https://tools.ietf.org/id/draft-knodel-terminology-00.html#rfc.section.1.1.1






Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-13 Thread Eric Badger
+1 (binding)

- Built from source on RHEL 7.6
- Deployed on a single-node cluster
- Verified DefaultContainerRuntime
- Verified RuncContainerRuntime (after setting things up with the
docker-to-squash tool available on YARN-9564)

Eric


Re: A more inclusive elephant...

2020-07-10 Thread Eric Badger
Thanks for writing this up, Carlo. I'm +1 (idk if I'm technically binding
on this or not) for the changes moving forward, and I think we should refactor
away any instances that are internal to the code (i.e. not APIs or other things
that would break compatibility) in all active branches and then also change
the APIs in trunk (an incompatible change).

I just came across an internal issue related to the NM whitelist/blacklist.
I would be happy to go refactor the code and look for instances of these
and replace them with allowlist/blocklist. Doing a quick "git grep" of
trunk, I see 270 instances of "whitelist" and 1318 instances of
"blacklist".

If there are no objections, I'll create a JIRA to clean this specific stuff
up. It would be wonderful if others could pick up a different portion (e.g.
master/slave) so that we can spread the work out.

Eric

On Tue, Jul 7, 2020 at 6:27 PM Carlo Aldo Curino 
wrote:

> Hello Folks,
>
> I hope you are all doing well...
>
> *The problem*
> The recent protests made me realize that we are not just bystanders of
> the systematic racism that affects our society, but we are active
> participants in it. Being "non-racist" is not enough; I strongly feel we
> should be actively "anti-racist" in our day-to-day lives, and continuously
> check our biases. I assume most of you will agree with the general
> sentiment, but based on your exposure to the recent events and US
> culture/history, you might have more or less strong feelings about your role
> in the problem and potential solution.
>
> *What can we do about it?* I think a simple action we can take is to work
> on our code/comments/documentation/websites and remove racist terminology.
> Here is an IETF draft to fix up some of the most egregious examples
> (master/slave, whitelist/blacklist) with proposed alternatives.
>
> https://tools.ietf.org/id/draft-knodel-terminology-00.html#rfc.section.1.1.1
> As we go about this effort, we should also consider other
> "non-inclusive" terminology issues around gender (e.g., binary gendered
> examples, "Alice" doing the wrong security thing systematically), and
> ableism (e.g., referring to misbehaving hardware as "lame" or "limping",
> etc.).
> The easiest action item is to avoid this going forward (ideally adding it
> to the checkstyles if possible), a more costly one is to start going back
> and refactor away existing instances.
>
> I know this requires a bunch of work as refactorings might break dev
> branches and non-committed patches, possibly scripts, etc. but I think this
> is something important and relatively simple we can do. The effect goes
> well beyond some text in github, it signals what we believe in, and forces
> hundreds of users and contributors to notice and think about it. Our
> force-multiplier is huge and it matches our responsibility.
>
> What do you folks think?
>
> Thanks,
> Carlo
>


Re: [DISCUSS] Removing the archaic master branch

2020-06-19 Thread Eric Badger
+1

Eric

On Fri, Jun 19, 2020 at 1:14 PM Ayush Saxena  wrote:

> Thanx Owen for initiating.
> +1
> -Ayush
>
> > On 19-Jun-2020, at 10:52 PM, Zhe Zhang  wrote:
> >
> > +1 Thanks Owen
> >
> > I scratched my head for a while and couldn't think of a downside.
> >
> >> On Fri, Jun 19, 2020 at 10:20 AM Owen O'Malley 
> >> wrote:
> >>
> >> We unfortunately have a lot of master/slave and whitelist/blacklist
> >> terminology usage in Hadoop. It will take a while to fix them all, but
> one
> >> is easy to fix. In particular, we have a "master" branch that hasn't
> been
> >> used since the project reunification and we use "trunk" as the main
> branch.
> >>
> >> I propose that we delete the "master" branch. Thoughts?
> >>
> >> .. Owen
> >>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


Re: Next Hadoop Storage Online Meetup (APAC Mandarin)

2019-12-19 Thread Eric Badger
For those of us that don't speak Mandarin, would someone be able to take
notes in English? I'm very interested in hearing about the experience in
moving from Hadoop 2.x to 3.x.

Eric

On Thu, Dec 19, 2019 at 2:07 PM Wei-Chiu Chuang
 wrote:

> As you are probably aware, DiDi upgraded a large cluster from Hadoop 2 to
> Hadoop 3 recently.
>
> Fei Hui from DiDi graciously agreed to speak to us their upgrade experience
> at the next APAC Mandarin Online meetup which is in two weeks.
>
> So stay tuned!
>
> Time/Date:
> Jan 1 10PM (US west coast PST) / Jan 2 2pm (Beijing, China CST)
>


Re: [DISCUSS] Making 2.10 the last minor 2.x release

2019-11-19 Thread Eric Badger
Hello all,

Is it written anywhere what the difference is between a minor release and a
point/dot/maintenance (I'll use "point" from here on out) release? I have
looked around and I can't find anything other than some compatibility
documentation in 2.x that has since been removed in 3.x [1] [2]. I think
this would help shape my opinion on whether or not to keep branch-2 alive.
My current understanding is that we can't really break compatibility in
either a minor or point release. But the only mention of the difference
between minor and point releases is how to deal with Stable, Evolving, and
Unstable tags, and how to deal with changing default configuration values.
So it seems like there really isn't a big official difference between the
two. In my mind, the functional difference between the two is that the
minor releases may have added features and rewrites, while the point
releases only have bug fixes. This might be an incorrect understanding, but
that's what I have gathered from watching the releases over the last few
years. Whether or not this is a correct understanding, I think that this
needs to be documented somewhere, even if it is just a convention.

Given my assumed understanding of minor vs point releases, here are the
pros/cons that I can think of for having a branch-2. Please add on or
correct me for anything you feel is missing or inadequate.
Pros:
- Features/rewrites/higher-risk patches are less likely to be put into
2.10.x
- It is less necessary to move to 3.x

Cons:
- Bug fixes are less likely to be put into 2.10.x
- An extra branch to maintain
  - Committers have an extra branch (5 vs 4 total branches) to commit
patches to if they should go all the way back to 2.10.x
- It is less necessary to move to 3.x

So on the one hand you get added stability in fewer features being
committed to 2.10.x, but then on the other you get fewer bug fixes being
committed. In a perfect world, we wouldn't have to make this tradeoff. But
we don't live in a perfect world and committers will make mistakes either
because of lack of knowledge or simply because they made a mistake. If we
have a branch-2, committers will forget, not know to, or choose not to (for
whatever reason) commit valid bug fixes back all the way to branch-2.10. If
we don't have a branch-2, committers who want their borderline risky
feature in the 2.x line will err on the side of putting it into branch-2.10
instead of proposing the creation of a branch-2. Clearly I have made quite
a few assumptions here based on my own experiences, so I would like to hear
if others have similar or opposing views.

As far as 3.x goes, to me it seems like some of the reasoning for killing
branch-2 is due to an effort to push the community towards 3.x. This is why
I have added movement to 3.x as both a pro and a con. As a community trying
to move forward, keeping as many companies on similar branches as possible
is a good way to make sure the code is well-tested. However, from a
stability point of view, moving to 3.x is still scary and being able to
stay on 2.x until you are comfortable to move is very nice. The 2.10.0
bridge release effort has been very good at making it possible for people
to move from 2.x in 3.x, but the diff between 2.x and 3.x is so large that
it is reasonable for companies to want to be extra cautious with 3.x due to
potential performance degradation at large scale.

A question I'm pondering is what happens when we move to Java 11 and
someone is still on 2.x? If they want to backport HADOOP-15338 for Java 11
support to 2.x, surely not everyone is going to want that (at least not
immediately). The 2.10 documentation states, "The JVM requirements will not
change across point releases within the same minor release except if the JVM
version under question becomes unsupported" [1], so this would warrant a 2.11
release until Java 8 becomes unsupported (though one could argue that it is
already unsupported since Oracle is no longer giving public Java 8 updates).
If we don't keep branch-2 around now, would a Java 11 backport be the
catalyst for a branch-2 revival?

Not sure if this really leads to any sort of answer from me on whether or
not we should keep branch-2 alive, but these are the things that I am
weighing in my mind. For me, the bigger problem beyond having branch-2 or
not is committers not being on the same page with where they should commit
their patches.

Eric

[1]
https://hadoop.apache.org/docs/r2.10.0/hadoop-project-dist/hadoop-common/Compatibility.html
[2]
https://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/Compatibility.html

On Tue, Nov 19, 2019 at 2:49 PM epa...@apache.org  wrote:

> Hi Konstantin,
>
> Sure, I understand those concerns. On the other hand, I worry about the
> stability of 2.10, since we will be on it for a couple of years at least.
> I worry
>  that some committers may want to put new features into a branch 2 release,
>  and without a 

Re: [VOTE] Release Apache Hadoop 2.10.0 (RC0)

2019-10-22 Thread Eric Badger
Hi Jonathan,

Thanks for putting this RC together. You stated that there are improvements
related to rolling upgrades from 2.x to 3.x and I know I have seen multiple
JIRAs getting committed to that effect. Could you describe any tests that
you have done to verify rolling upgrade compatibility for 3.x servers
talking to 2.x clients and vice versa?

Thanks,

Eric

On Tue, Oct 22, 2019 at 1:49 PM Jonathan Hung  wrote:

> Thanks Konstantin and Zhankun. Unfortunately a feature slipped our radar
> (HDFS-14667). Since this is the first of a minor release, we would like to
> get it into 2.10.0.
>
> HDFS-14667 has been committed to branch-2.10.0, I will be rolling an RC1
> shortly.
>
> Jonathan Hung
>
>
> On Tue, Oct 22, 2019 at 1:39 AM Zhankun Tang  wrote:
>
> > Thanks for the effort, Jonathan!
> >
> > +1 (non-binding) on RC0.
> >  - Set up a single node cluster with the binary tarball
> >  - Run Spark Pi and pySpark job
> >
> > BR,
> > Zhankun
> >
> > On Tue, 22 Oct 2019 at 14:31, Konstantin Shvachko 
> > wrote:
> >
> >> +1 on RC0.
> >> - Verified signatures
> >> - Built from sources
> >> - Ran unit tests for new features
> >> - Checked artifacts on Nexus, made sure the sources are present.
> >>
> >> Thanks
> >> --Konstantin
> >>
> >>
> >> On Wed, Oct 16, 2019 at 6:01 PM Jonathan Hung 
> >> wrote:
> >>
> >> > Hi folks,
> >> >
> >> > This is the first release candidate for the first release of Apache
> >> Hadoop
> >> > 2.10 line. It contains 361 fixes/improvements since 2.9 [1]. It
> includes
> >> > features such as:
> >> >
> >> > - User-defined resource types
> >> > - Native GPU support as a schedulable resource type
> >> > - Consistent reads from standby node
> >> > - Namenode port based selective encryption
> >> > - Improvements related to rolling upgrade support from 2.x to 3.x
> >> >
> >> > The RC0 artifacts are at:
> >> http://home.apache.org/~jhung/hadoop-2.10.0-RC0/
> >> >
> >> > RC tag is release-2.10.0-RC0.
> >> >
> >> > The maven artifacts are hosted here:
> >> >
> >>
> https://repository.apache.org/content/repositories/orgapachehadoop-1241/
> >> >
> >> > My public key is available here:
> >> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >> >
> >> > The vote will run for 5 weekdays, until Wednesday, October 23 at 6:00
> pm
> >> > PDT.
> >> >
> >> > Thanks,
> >> > Jonathan Hung
> >> >
> >> > [1]
> >> >
> >> >
> >>
> https://issues.apache.org/jira/issues/?jql=project%20in%20(HDFS%2C%20YARN%2C%20HADOOP%2C%20MAPREDUCE)%20AND%20resolution%20%3D%20Fixed%20AND%20fixVersion%20%3D%202.10.0%20AND%20fixVersion%20not%20in%20(2.9.2%2C%202.9.1%2C%202.9.0)
> >> >
> >>
> >
>


Re: Incompatible changes between branch-2.8 and branch-2.9

2019-09-25 Thread Eric Badger
* For YARN-7813, not sure why moving from 2.8.4/5 -> 2.8.6 would be
incompatible with this strategy? It should be OK to remove/add optional
fields (removing the field with id 12, and adding the field with id 13)
  - Ah, I misunderstood. I was thinking that the field was overwritten in
branch-2.8 as well. Yeah, I think that approach will be fine.

On Wed, Sep 25, 2019 at 2:31 AM Robert Kanter  wrote:

> >
> > *   For YARN-6050, there's a bit here:
> > https://developers.google.com/protocol-buffers/docs/proto that says
> > "optional is compatible with repeated", so I think we should be OK there.
> >   - Optional is compatible with repeated over the wire such that
> > protobuf won't blow up, but does that actually mean that it's compatible
> in
> > this case? If it's expecting an optional and gets a repeated, it's going
> to
> > drop everything except for the last value. I don't know enough about
> > YARN-6050 to say if this will be ok or not.
>
>
> It's been a while since I looked into this, but I think it should be okay.
>  If an older client (using optional) sends the message to a newer server
> (using repeated), then there will never be more than one value for the
> field.  The server puts these into a list, so the list would simply have a
> single value in it.  The server's logic should be able to handle a single
> valued list here because (a) IIRC we wanted to make sure compatibility
> wasn't a problem (Cloudera supported rolling upgrades between CDH 5.x so
> this was important) and (b) sending a single resource request, even in a
> newer client, is still a valid thing to do.
> If a newer client (using repeated) sends the message to an older server
> (using optional), I'm not sure what will happen.  My guess is that it will
> drop the extra values (though I wonder if it will keep the first or last
> value...).  In any case, I believe most clients will only send the one
> value - in order for a client to send multiple values, you'd have to
> specify some additional MR configs (see MAPREDUCE-6871).  IIRC, there's
> also a SPARK JIRA similar to MAPREDUCE-6871, but I can't find it right now.
>
> - Robert
>
> On Tue, Sep 24, 2019 at 9:49 PM Jonathan Hung 
> wrote:
>
> > - I've created YARN-9855 and uploaded patches to fix YARN-6616 in
> > branch-2.8 and branch-2.7.
> > - For YARN-6050, not sure either. Robert/Wangda, can you comment on
> > YARN-6050 compatibility?
> > - For YARN-7813, not sure why moving from 2.8.4/5 -> 2.8.6 would be
> > incompatible with this strategy? It should be OK to remove/add optional
> > fields (removing the field with id 12, and adding the field with id 13).
> > The difficulties I see here are, we would have to leave id 12 blank in
> > 2.8.6 (so we cannot have YARN-6164 in branch-2.8), and users on 2.8.4/5
> > would have to move to 2.8.6 before moving to 2.9+. But rolling upgrade
> > would still work IIUC.
> >
> > Jonathan Hung
> >
> >
> > On Tue, Sep 24, 2019 at 2:52 PM Eric Badger 
> > wrote:
> >
> >> *   For YARN-6616, for branch-2.8 and below, it was only committed to
> >> 2.7.8/2.8.6 which have not been released (as I understand). Perhaps we
> can
> >> revert YARN-6616 from branch-2.7 and branch-2.8.
> >>   - This seems reasonable. Since we haven't released anything, it should
> >> be no issue to change the 2.7/2.8 protobuf field to have the same value
> as
> >> 2.9+
> >>
> >> *   For YARN-6050, there's a bit here:
> >> https://developers.google.com/protocol-buffers/docs/proto that says
> >> "optional is compatible with repeated", so I think we should be OK
> there.
> >>   - Optional is compatible with repeated over the wire such that
> >> protobuf won't blow up, but does that actually mean that it's
> compatible in
> >> this case? If it's expecting an optional and gets a repeated, it's
> going to
> >> drop everything except for the last value. I don't know enough about
> >> YARN-6050 to say if this will be ok or not.
> >>
> >> *   For YARN-7813, it's in 2.8.4 so it seems upgrading from 2.8.4 or
> >> 2.8.5 to a 2.9+ version will be an issue. One option could be to move
> the
> >> intraQueuePreemptionDisabled field from id 12 to id 13 in branch-2.8,
> then
> >> users would upgrade from 2.8.4/2.8.5 to 2.8.6 (someone would have to
> >> release this), then upgrade from 2.8.6 to 2.9+.
> >>   - I'm ok with this, but it should be noted that the upgrade from
> >> 2.8.4/2.8.5 to 2.8.6 (or 2.9+) would not be compatible for a rolling
> >> upgrade. So this would cause some pain to anybody with clusters on those
> >> versions.

Re: Incompatible changes between branch-2.8 and branch-2.9

2019-09-24 Thread Eric Badger
*   For YARN-6616, for branch-2.8 and below, it was only committed to
2.7.8/2.8.6 which have not been released (as I understand). Perhaps we can
revert YARN-6616 from branch-2.7 and branch-2.8.
  - This seems reasonable. Since we haven't released anything, it should be
no issue to change the 2.7/2.8 protobuf field to have the same value as 2.9+

*   For YARN-6050, there's a bit here:
https://developers.google.com/protocol-buffers/docs/proto that says
"optional is compatible with repeated", so I think we should be OK there.
  - Optional is compatible with repeated over the wire such that protobuf
won't blow up, but does that actually mean that it's compatible in this
case? If it's expecting an optional and gets a repeated, it's going to drop
everything except for the last value. I don't know enough about YARN-6050
to say if this will be ok or not.

*   For YARN-7813, it's in 2.8.4 so it seems upgrading from 2.8.4 or 2.8.5
to a 2.9+ version will be an issue. One option could be to move the
intraQueuePreemptionDisabled field from id 12 to id 13 in branch-2.8, then
users would upgrade from 2.8.4/2.8.5 to 2.8.6 (someone would have to
release this), then upgrade from 2.8.6 to 2.9+.
  - I'm ok with this, but it should be noted that the upgrade from
2.8.4/2.8.5 to 2.8.6 (or 2.9+) would not be compatible for a rolling
upgrade. So this would cause some pain to anybody with clusters on those
versions.
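The "optional is compatible with repeated" point above (YARN-6050) can be sketched at the wire level. The following is a minimal, hand-rolled illustration of the protobuf encoding rule — not Hadoop or YARN code; field id 17 mirrors am_container_resource_request, but the payloads are stand-ins and are treated as string fields for simplicity:

```python
"""Sketch: optional and repeated fields share the same wire encoding, so a
message carrying two occurrences of a field parses either way. For a
non-repeated scalar/string field the parser keeps the last occurrence."""

def encode_varint(n: int) -> bytes:
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def encode_field(field_id: int, payload: bytes) -> bytes:
    # wire type 2 = length-delimited (strings, bytes, embedded messages)
    return encode_varint((field_id << 3) | 2) + encode_varint(len(payload)) + payload

def decode_varint(buf: bytes, i: int):
    shift = result = 0
    while True:
        b = buf[i]; i += 1
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, i
        shift += 7

def decode_fields(buf: bytes):
    i, fields = 0, []
    while i < len(buf):
        key, i = decode_varint(buf, i)
        length, i = decode_varint(buf, i)   # assumes wire type 2 only
        fields.append((key >> 3, buf[i:i + length]))
        i += length
    return fields

# A "repeated" writer emits field 17 twice on the wire:
wire = encode_field(17, b"request-1") + encode_field(17, b"request-2")
as_repeated = [v for fid, v in decode_fields(wire) if fid == 17]  # keeps both
as_optional = as_repeated[-1]  # non-repeated scalar reader: last one wins
```

One nuance worth noting: for embedded *message* fields (as opposed to scalars), a non-repeated reader merges the multiple instances rather than dropping all but the last, which bears on whether the server-side behavior Eric worries about is actually lossy.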

Eric

On Tue, Sep 24, 2019 at 2:42 PM Jonathan Hung  wrote:

> Sorry, let me edit my first point. We can just create addendums for
> YARN-6616 in branch-2.7 and branch-2.8 to edit the submitTime field to the
> correct id 28. We don’t need to revert YARN-6616 from these branches
> completely.
>
> Jonathan
>
> 
> From: Jonathan Hung 
> Sent: Tuesday, September 24, 2019 11:38 AM
> To: Eric Badger
> Cc: Hadoop Common; yarn-dev; mapreduce-dev; Hdfs-dev
> Subject: Re: Incompatible changes between branch-2.8 and branch-2.9
>
> Hi Eric, thanks for the investigation.
>
>   *   For YARN-6616, for branch-2.8 and below, it was only committed to
> 2.7.8/2.8.6 which have not been released (as I understand). Perhaps we can
> revert YARN-6616 from branch-2.7 and branch-2.8.
>   *   For YARN-6050, there's a bit here:
> https://developers.google.com/protocol-buffers/docs/proto that says
> "optional is compatible with repeated", so I think we should be OK there.
>   *   For YARN-7813, it's in 2.8.4 so it seems upgrading from 2.8.4 or
> 2.8.5 to a 2.9+ version will be an issue. One option could be to move the
> intraQueuePreemptionDisabled field from id 12 to id 13 in branch-2.8, then
> users would upgrade from 2.8.4/2.8.5 to 2.8.6 (someone would have to
> release this), then upgrade from 2.8.6 to 2.9+.
>
> Jonathan Hung
>
>
> On Tue, Sep 24, 2019 at 9:23 AM Eric Badger 
> wrote:
> We (Verizon Media) are currently moving towards upgrading our clusters from
> our internal fork of branch-2.8 to an internal fork of branch-2. During
> this process, we have found multiple incompatible changes in protobufs
> between branch-2.8 and branch-2. These incompatibilities were all
> introduced between branch-2.8 and branch-2.9. I did a git diff over all
> .proto files across the branch-2.8 and branch-2.9 and found 3 instances of
> incompatibilities from 3 separate commits. All of the incompatibilities are
> in yarn_protos.proto
>
>
> I would like to discuss how to fix these incompatible changes. Otherwise,
> rolling upgrades will not be supported between branch-2.8 (or below) and
> branch-2.9 (or beyond). We could revert the incompatible changes, but then
> the new releases would be incompatible with the releases that have these
> incompatible changes. If we do nothing, then rolling upgrades won't work
> between 2.8- and 2.9+.
>
>
> Thanks,
>
>
> Eric
>
>
> ---
>
>
> git diff branch-2.8..branch-2.9 $(find . -name '*\.proto')
>
>
> https://issues.apache.org/jira/browse/YARN-6616
>
>- Trunk patch (applied through branch-2.9) differs from branch-2.8 patch
>
> @@ -211,7 +245,20 @@ message ApplicationReportProto {
>
>optional PriorityProto priority = 23;
>
>optional string appNodeLabelExpression = 24;
>
>optional string amNodeLabelExpression = 25;
>
> -  optional int64 submitTime = 26;
>
> +  repeated AppTimeoutsMapProto appTimeouts = 26;
>
> +  optional int64 launchTime = 27;
>
> +  optional int64 submitTime = 28;
>
>
> https://issues.apache.org/jira/browse/YARN-6050
>
>- Trunk and branch-2 patches both change the protobuf type in the same
>way.
>
> @@ -356,7 +416,22 @@ message ApplicationSubmissionContextProto {
>
>optional LogAggregati

Incompatible changes between branch-2.8 and branch-2.9

2019-09-24 Thread Eric Badger
We (Verizon Media) are currently moving towards upgrading our clusters from
our internal fork of branch-2.8 to an internal fork of branch-2. During
this process, we have found multiple incompatible changes in protobufs
between branch-2.8 and branch-2. These incompatibilities were all
introduced between branch-2.8 and branch-2.9. I did a git diff over all
.proto files across the branch-2.8 and branch-2.9 and found 3 instances of
incompatibilities from 3 separate commits. All of the incompatibilities are
in yarn_protos.proto


I would like to discuss how to fix these incompatible changes. Otherwise,
rolling upgrades will not be supported between branch-2.8 (or below) and
branch-2.9 (or beyond). We could revert the incompatible changes, but then
the new releases would be incompatible with the releases that have these
incompatible changes. If we do nothing, then rolling upgrades won't work
between 2.8- and 2.9+.


Thanks,


Eric


---


git diff branch-2.8..branch-2.9 $(find . -name '*\.proto')


https://issues.apache.org/jira/browse/YARN-6616

   - Trunk patch (applied through branch-2.9) differs from branch-2.8 patch

@@ -211,7 +245,20 @@ message ApplicationReportProto {

   optional PriorityProto priority = 23;

   optional string appNodeLabelExpression = 24;

   optional string amNodeLabelExpression = 25;

-  optional int64 submitTime = 26;

+  repeated AppTimeoutsMapProto appTimeouts = 26;

+  optional int64 launchTime = 27;

+  optional int64 submitTime = 28;


https://issues.apache.org/jira/browse/YARN-6050

   - Trunk and branch-2 patches both change the protobuf type in the same
   way.

@@ -356,7 +416,22 @@ message ApplicationSubmissionContextProto {

   optional LogAggregationContextProto log_aggregation_context = 14;

   optional ReservationIdProto reservation_id = 15;

   optional string node_label_expression = 16;

-  optional ResourceRequestProto am_container_resource_request = 17;

+  repeated ResourceRequestProto am_container_resource_request = 17;

+  repeated ApplicationTimeoutMapProto application_timeouts = 18;


https://issues.apache.org/jira/browse/YARN-7813

   - Trunk (applied through branch-3.1) and branch-3.0 (applied through
   branch-2.9) patches differ from branch-2.8 patch

@@ -425,7 +501,21 @@ message QueueInfoProto {

   optional string defaultNodeLabelExpression = 9;

   optional QueueStatisticsProto queueStatistics = 10;

   optional bool preemptionDisabled = 11;

-  optional bool intraQueuePreemptionDisabled = 12;

+  repeated QueueConfigurationsMapProto queueConfigurationsMap = 12;

+  optional bool intraQueuePreemptionDisabled = 13;
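The reason reusing a field id with a different type breaks old readers can be seen in the tag byte itself. This is an illustrative sketch of protobuf's tag encoding, not Hadoop code (the payload bytes are made up): branch-2.8's field 12 is an `optional bool` (wire type 0, varint), while 2.9+ puts a repeated embedded message at the same id (wire type 2, length-delimited), so a 2.8 reader sees an unexpected wire type at field 12.

```python
"""Sketch: the protobuf tag packs the wire type into the low 3 bits, so the
same field id encoded as a bool vs. an embedded message produces different,
mutually unintelligible tags."""

def tag(field_id: int, wire_type: int) -> int:
    return (field_id << 3) | wire_type

VARINT, LENGTH_DELIMITED = 0, 2

# branch-2.8 writer: field 12 as bool true -> tag 0x60, value 0x01
old_bytes = bytes([tag(12, VARINT), 1])

# 2.9+ writer: field 12 as a 4-byte embedded message -> tag 0x62
new_bytes = bytes([tag(12, LENGTH_DELIMITED), 4]) + b"\x08\x01\x10\x00"

def field_and_type(buf: bytes):
    key = buf[0]            # tags for small field ids fit in one byte
    return key >> 3, key & 0x07

old_field, old_type = field_and_type(old_bytes)   # (12, 0): expects a varint
new_field, new_type = field_and_type(new_bytes)   # (12, 2): length-delimited
```

Because the old reader parses what follows tag 0x62 as a length-delimited blob rather than the bool it declared, the data at id 12 is silently misread (or rejected), which is why moving intraQueuePreemptionDisabled to a fresh id 13 is the compatible fix.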


Re: [VOTE] Mark 2.6, 2.7, 3.0 release lines EOL

2019-08-26 Thread Eric Badger
- Stuff has been going into branch-2 sporadically but I don't know who is
actively using that code other than as part of a cherrypick backwards
strategy.

- Should we do a 2.10.x release? Or just say "time to upgrade?"

We have talked at a few different Hadoop contributors meetups and community
syncs and agreed that 2.10 should serve as a "bridge" release for 3.x so
that 2.x clients can talk to 3.x servers and vice versa without
compatibility issues. At the last meeting where we discussed this it seemed
that there were 3 major compatibility issues. The biggest one was in the
edit logs.

Verizon Media is currently working on upgrading to branch-2 from
branch-2.8. I believe LinkedIn is also interested in branch-2 and working
towards a 2.10 release.

Eric

On Mon, Aug 26, 2019 at 9:03 AM Steve Loughran 
wrote:

> On Sat, Aug 24, 2019 at 2:25 AM Wangda Tan  wrote:
>
> > Hi Steve,
> >
> > The proposal is to EOL for the following branches:
> >
> > [2.0.x - 2.7.x]
> > [3.0.x]
> >
> > 2.8.x, 2.9.x, 2.10.x (not released yet), 3.1.x (and later) are not EOL.
> >
>
> one final 2.8.x, 1+ for 2.9, and then we have to start thinking of 2.10 as
> a "last ever branch-2 release".
>
> Stuff has been going into branch-2 sporadically but I don't know who is
> actively using that code other than as part of a cherrypick backwards
> strategy.
>
> Should we do a 2.10.x release? Or just say "time to upgrade?"
>
>
>
> > Thoughts?
> >
> > On Sat, Aug 24, 2019 at 1:40 AM Steve Loughran 
> > wrote:
> >
> >>
> >>
> >> On Wed, Aug 21, 2019 at 4:03 AM Wangda Tan  wrote:
> >>
> >>> Hi all,
> >>>
> >>> This is a vote thread to mark any versions smaller than 2.7
> (inclusive),
> >>> and 3.0 EOL. This is based on discussions of [1]
> >>>
> >>
> >> 3.0 inclusive? i.e the non EOl ones being 2.8+ and 3.1+?
> >>
> >
>


Re: [VOTE] Mark 2.6, 2.7, 3.0 release lines EOL

2019-08-21 Thread Eric Badger
+1

On Wed, Aug 21, 2019 at 9:40 AM Sean Busbey 
wrote:

> +1
>
> On Tue, Aug 20, 2019 at 10:03 PM Wangda Tan  wrote:
> >
> > Hi all,
> >
> > This is a vote thread to mark any versions smaller than 2.7 (inclusive),
> > and 3.0 EOL. This is based on discussions of [1]
> >
> > This discussion runs for 7 days and will conclude on Aug 28 Wed.
> >
> > Please feel free to share your thoughts.
> >
> > Thanks,
> > Wangda
> >
> > [1]
> >
> http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201908.mbox/%3cCAD++eC=ou-tit1faob-dbecqe6ht7ede7t1dyra2p1yinpe...@mail.gmail.com%3e
> > ,
>
>
>
> --
> busbey
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>
>


Re: [DISCUSS] Prefer Github PR Integration over patch in JIRA

2019-07-22 Thread Eric Badger
Where would JIRA fit into the PR workflow? Would we file JIRAs just to
track github PRs and have all of the discussion on the PR?

Eric

On Mon, Jul 22, 2019 at 1:10 PM Dinesh Chitlangia
 wrote:

> +1 Absolutely. It also makes it easy/clean for reviewers to leave specific
> comments and the authors can make incremental changes without the hassles
> of generating iterative patch files.
>
> Thanks,
> Dinesh
>
>
>
>
> On Mon, Jul 22, 2019 at 2:06 PM Wei-Chiu Chuang 
> wrote:
>
> > Historically, Hadoop developers create patches and attach them to JIRA,
> > and then the Yetus bot runs precommit against the patch in the JIRA.
> >
> > The Github PR is more convenient for code review and less hassle for
> > committers to merge a commit. I am proposing for the community to prefer
> > Github PR over the traditional patch-in-jira. This doesn't mean we will
> > reject the traditional way, but we can move gradually to the new way.
> > Additionally, update the Hadoop "How to contribute" wiki, and advertise
> > that Github PR is the preferred method.
> >
> > Thoughts?
> >
>
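
The PR-based flow discussed above replaces attach-a-patch with a branch-and-push workflow. A minimal sketch of the local steps, under assumptions: the repo, JIRA key, file names, and committer identity below are all hypothetical, and the actual push/PR step is left as comments since it needs a fork and network access:

```shell
set -e
work=$(mktemp -d); cd "$work"
# Stand-in for a clone of apache/hadoop (a real contribution would
# start from "git clone" of your fork)
git init -q hadoop && cd hadoop
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "trunk baseline"
# One branch per JIRA issue, conventionally named after it:
git checkout -q -b HADOOP-12345
echo "fix" > fix.txt && git add fix.txt
git -c user.email=dev@example.com -c user.name=dev \
    commit -q -m "HADOOP-12345. Describe the change."
git log --oneline -1
# Then push and open the PR against apache/hadoop trunk, e.g.:
#   git push origin HADOOP-12345
#   (open a GitHub PR titled "HADOOP-12345. Describe the change.")
```

The one-branch-per-JIRA convention keeps the JIRA issue as the tracking record while the review itself happens on the PR.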


Re: [DISCUSS] A unified and open Hadoop community sync up schedule?

2019-06-19 Thread Eric Badger
Is there a YARN call today (in 16 minutes)? I saw it on the calendar until
a few minutes ago.

Eric

On Tue, Jun 18, 2019 at 11:18 PM Wangda Tan  wrote:

> Thanks @Wei-Chiu Chuang. Updated the gdoc.
>
> On Tue, Jun 18, 2019 at 7:35 PM Wei-Chiu Chuang 
> wrote:
>
> > Thanks Wangda,
> >
> > I'd just like to make a correction -- the .ics calendar file says the first
> > Wednesday for HDFS/cloud connector is in Mandarin, whereas the gdoc says to
> > host it on the third Wednesday.
> >
> > On Tue, Jun 18, 2019 at 5:29 PM Wangda Tan  wrote:
> >
> > > Hi Folks,
> > >
> > > I just updated doc:
> > >
> > >
> >
> https://docs.google.com/document/d/1GfNpYKhNUERAEH7m3yx6OfleoF3MqoQk3nJ7xqHD9nY/edit#
> > > with
> > > dial-in information, notes, etc.
> > >
> > > Here's a calendar to subscribe:
> > >
> > >
> >
> https://calendar.google.com/calendar/ical/hadoop.community.sync.up%40gmail.com/public/basic.ics
> > >
> > > I'm thinking to give it a try from next week, any suggestions?
> > >
> > > Thanks,
> > > Wangda
> > >
> > > On Fri, Jun 14, 2019 at 4:02 PM Wangda Tan 
> wrote:
> > >
> > > > And please let me know if you can help with coordinating logistics
> > > > stuff, cross-checking, etc. Let's spend some time next week to get it
> > finalized.
> > > >
> > > > Thanks,
> > > > Wangda
> > > >
> > > > On Fri, Jun 14, 2019 at 4:00 PM Wangda Tan 
> > wrote:
> > > >
> > > >> Hi Folks,
> > > >>
> > > >> Yufei: Agree with all your opinions.
> > > >>
> > > >> Anu: it might be more efficient to use Google doc to track meeting
> > > >> minutes and we can put them together.
> > > >>
> > > >> I just put the proposal to
> > > >>
> > >
> >
> https://calendar.google.com/calendar/b/3?cid=aGFkb29wLmNvbW11bml0eS5zeW5jLnVwQGdtYWlsLmNvbQ
> > > ,
> > > >> you can check if the proposed time works or not. If you agree, we can
> > > >> go ahead and add the meeting link, google doc, etc.
> > > >>
> > > >> If you want to have edit permissions, please drop a private email to
> > me
> > > >> so I will add you.
> > > >>
> > > >> We still need more hosts. Ideally we should have at least 3 hosts per
> > > >> track, just like HDFS blocks :). Please volunteer, so we can have
> > > >> enough members to run the meeting.
> > > >>
> > > >> Let's shoot for the end of next week: get all logistics done and
> > > >> start the community sync-up series from the week of Jun 25th.
> > > >>
> > > >> Thanks,
> > > >> Wangda
> > > >>
> > > >>
> > > >>
> > > >>
> > > >> On Tue, Jun 11, 2019 at 10:23 AM Anu Engineer <
> aengin...@cloudera.com
> > >
> > > >> wrote:
> > > >>
> > > >>> For Ozone, we have started using the Wiki itself as the agenda and
> > > after
> > > >>> the meeting is over, we convert it into the meeting notes.
> > > >>> Here is an example, the project owner can edit and maintain it, it
> is
> > > >>> like 10 mins work - and allows anyone to add stuff into the agenda
> > too.
> > > >>>
> > > >>>
> > > >>>
> > >
> >
> https://cwiki.apache.org/confluence/display/HADOOP/2019-06-10+Meeting+notes
> > > >>>
> > > >>> --Anu
> > > >>>
> > > >>> On Tue, Jun 11, 2019 at 10:20 AM Yufei Gu 
> > > wrote:
> > > >>>
> > >  +1 for this idea. Thanks Wangda for bringing this up.
> > > 
> > >  Some comments to share:
> > > 
> > > - Agenda needs to be posted ahead of the meeting, and any interested
> > >   party is welcome to contribute topics.
> > > - We should encourage more people to attend. That's the whole point of
> > >   the meeting.
> > > - Hopefully, this can mitigate the situation where some patches wait
> > >   for review forever, which turns away new contributors.
> > > - 30m per session sounds a little bit short; we can try it out and see
> > >   if an extension is needed.
> > > 
> > >  Best,
> > > 
> > >  Yufei
> > > 
> > >  `This is not a contribution`
> > > 
> > > 
> > >  On Fri, Jun 7, 2019 at 4:39 PM Wangda Tan 
> > > wrote:
> > > 
> > >  > Hi Hadoop-devs,
> > >  >
> > >  > Previously we had a regular YARN community sync up (1 hr, biweekly, but
> > >  > not open to the public). Recently, because of changes in our schedules,
> > >  > fewer folks showed up in the sync up over the last several months.
> > >  >
> > >  > I saw the K8s community did a pretty good job to run their sig
> > >  meetings,
> > >  > there's regular meetings for different topics, notes, agenda,
> etc.
> > >  Such as
> > >  >
> > >  >
> > > 
> > >
> >
> https://docs.google.com/document/d/13mwye7nvrmV11q9_Eg77z-1w3X7Q1GTbslpml4J7F3A/edit
> > >  >
> > >  >
> > >  > For the Hadoop community, there are fewer such regular meetings open to
> > >  > the public, except for the Ozone project and offline meetups or
> > >  > Birds-of-a-Feather sessions at Hadoop/DataWorks Summit. Recently we have a

ebugs JIRAs

2019-05-06 Thread Eric Badger
It seems as though ebugs is running a static code analyzer across all of
hadoop and creating a new JIRA for every issue it finds (15 JIRAs in the
past 3 hours). While this could potentially be useful, I don't think it's a
good idea at all to file a new JIRA for every single issue that is found.
It would be better to file 1 issue for every type of problem that it finds
or 1 issue per project (HDFS, YARN, etc.).

I have a feeling that we're going to either end up closing a bunch of these
as invalid or ignoring them all, so I figured we might as well talk about
this earlier rather than later.

Eric

Examples:
https://issues.apache.org/jira/browse/YARN-9534
https://issues.apache.org/jira/browse/HDFS-14467


Re: [VOTE] Release Apache Hadoop 3.2.0 - RC1

2019-01-09 Thread Eric Badger
+1 (non-binding)

- Verified all hashes and checksums
- Built from source on RHEL 7
- Deployed a single node pseudo cluster
- Ran some example jobs
- Verified the Docker environment works with non-entrypoint mode

Eric

On Tue, Jan 8, 2019 at 5:42 AM Sunil G  wrote:

> Hi folks,
>
>
> Thanks to all of you who helped in this release [1] and for helping to vote
> for RC0. I have created second release candidate (RC1) for Apache Hadoop
> 3.2.0.
>
>
> Artifacts for this RC are available here:
>
> http://home.apache.org/~sunilg/hadoop-3.2.0-RC1/
>
>
> RC tag in git is release-3.2.0-RC1.
>
>
>
> The maven artifacts are available via repository.apache.org at
> https://repository.apache.org/content/repositories/orgapachehadoop-1178/
>
>
> This vote will run 7 days (5 weekdays), ending on 14th Jan at 11:59 pm PST.
>
>
>
> 3.2.0 contains 1092 [2] fixed JIRA issues since 3.1.0. Below feature
> additions
>
> are the highlights of this release.
>
> 1. Node Attributes Support in YARN
>
> 2. Hadoop Submarine project for running Deep Learning workloads on YARN
>
> 3. Support service upgrade via YARN Service API and CLI
>
> 4. HDFS Storage Policy Satisfier
>
> 5. Support Windows Azure Storage - Blob file system in Hadoop
>
> 6. Phase 3 improvements for S3Guard and Phase 5 improvements S3a
>
> 7. Improvements in Router-based HDFS federation
>
>
>
> Thanks to Wangda, Vinod, Marton for helping me in preparing the release.
>
> I have done few testing with my pseudo cluster. My +1 to start.
>
>
>
> Regards,
>
> Sunil
>
>
>
> [1]
>
>
> https://lists.apache.org/thread.html/68c1745dcb65602aecce6f7e6b7f0af3d974b1bf0048e7823e58b06f@%3Cyarn-dev.hadoop.apache.org%3E
>
> [2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.2.0)
> AND fixVersion not in (3.1.0, 3.0.0, 3.0.0-beta1) AND status = Resolved
> ORDER BY fixVersion ASC
>


Re: [VOTE] Release Apache Hadoop 3.2.0 - RC0

2018-11-27 Thread Eric Badger
+1 (non-binding)

- Verified all hashes and checksums
- Built from source on macOS 10.14.1, Java 1.8.0u65
- Deployed a pseudo cluster
- Ran some example jobs

Eric

On Tue, Nov 27, 2018 at 7:02 AM Ayush Saxena  wrote:

> Sunil, Thanks for driving this release!!!
>
> +1 (non-binding)
> -Built from source on Ubuntu 17.04 and JDK 8
> -Ran basic Hdfs Commands
> -Ran basic EC Commands
> -Ran basic RBF commands
> -Browsed HDFS and RBF WEB UI
> -Ran TeraGen/TeraSort
>
> Regards
> Ayush
>
> > On 27-Nov-2018, at 4:06 PM, Kitti Nanasi 
> wrote:
> >
> > Thanks Sunil for the driving the release!
> >
> > +1 (non-binding)
> >
> > - checked out git tag release-3.2.0-RC0
> > - built from source on Mac OS X 10.13.4, java version 8.0.172-zulu
> > - deployed on a 5 node cluster
> > - ran terasort, teragen, teravalidate with success
> > - executed basic hdfs, dfsadmin and ec commands
> >
> > Best,
> > Kitti
> >
> >> On Fri, Nov 23, 2018 at 1:07 PM Sunil G  wrote:
> >>
> >> Hi folks,
> >>
> >>
> >>
> >> Thanks to all contributors who helped in this release [1]. I have
> created
> >>
> >> first release candidate (RC0) for Apache Hadoop 3.2.0.
> >>
> >>
> >> Artifacts for this RC are available here:
> >>
> >> http://home.apache.org/~sunilg/hadoop-3.2.0-RC0/
> >>
> >>
> >>
> >> RC tag in git is release-3.2.0-RC0.
> >>
> >>
> >>
> >> The maven artifacts are available via repository.apache.org at
> >>
> >>
> https://repository.apache.org/content/repositories/orgapachehadoop-1174/
> >>
> >>
> >> This vote will run 7 days (5 weekdays), ending on Nov 30 at 11:59 pm
> PST.
> >>
> >>
> >>
> >> 3.2.0 contains 1079 [2] fixed JIRA issues since 3.1.0. Below feature
> >> additions
> >>
> >> are the highlights of this release.
> >>
> >> 1. Node Attributes Support in YARN
> >>
> >> 2. Hadoop Submarine project for running Deep Learning workloads on YARN
> >>
> >> 3. Support service upgrade via YARN Service API and CLI
> >>
> >> 4. HDFS Storage Policy Satisfier
> >>
> >> 5. Support Windows Azure Storage - Blob file system in Hadoop
> >>
> >> 6. Phase 3 improvements for S3Guard and Phase 5 improvements S3a
> >>
> >> 7. Improvements in Router-based HDFS federation
> >>
> >>
> >>
> >> Thanks to Wangda, Vinod, Marton for helping me in preparing the release.
> >>
> >> I have done few testing with my pseudo cluster. My +1 to start.
> >>
> >>
> >>
> >> Regards,
> >>
> >> Sunil
> >>
> >>
> >>
> >> [1]
> >>
> >>
> >>
> https://lists.apache.org/thread.html/68c1745dcb65602aecce6f7e6b7f0af3d974b1bf0048e7823e58b06f@%3Cyarn-dev.hadoop.apache.org%3E
> >>
> >> [2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.2.0)
> >> AND fixVersion not in (3.1.0, 3.0.0, 3.0.0-beta1) AND status = Resolved
> >> ORDER BY fixVersion ASC
> >>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


Re: [VOTE] Release Apache Hadoop 2.8.5 (RC0)

2018-09-11 Thread Eric Badger
+1 (non-binding)

- Verified all hashes and checksums
- Built from source on macOS 10.13.6, Java 1.8.0u65
- Deployed a pseudo cluster
- Ran some example jobs

Eric

On Tue, Sep 11, 2018 at 1:39 PM, Gabor Bota  wrote:

>   Thanks for the work Junping!
>
>   +1 (non-binding)
>
> - checked out git tag release-2.8.5-RC0
> - built from source on Mac OS X 10.13.6, java version 8.0.181-oracle
> - deployed on a 3 node cluster
> - verified pi job (yarn), teragen, terasort and teravalidate
>
>   Regards,
>   Gabor Bota
>
> On Tue, Sep 11, 2018 at 6:31 PM Eric Payne  invalid>
> wrote:
>
> > Thanks a lot Junping!
> >
> > +1 (binding)
> >
> > Tested the following:
> > - Built from source
> > - Installed on a 7 node, multi-tenant, insecure pseudo cluster, running
> > YARN capacity scheduler
> > - Added a queue via refresh
> > - Verified various GUI pages
> > - Streaming jobs
> > - Cross-queue (Inter) preemption
> > - In-queue (Intra) preemption
> > - Teragen / terasort
> >
> >
> > -Eric
> >
> >
> >
> >
> > On Monday, September 10, 2018, 7:01:46 AM CDT, 俊平堵 <
> junping...@apache.org>
> > wrote:
> >
> >
> >
> >
> >
> > Hi all,
> >
> > I've created the first release candidate (RC0) for Apache
> > Hadoop 2.8.5. This is our next point release to follow up 2.8.4. It
> > includes 33 important fixes and improvements.
> >
> >
> > The RC artifacts are available at:
> > http://home.apache.org/~junping_du/hadoop-2.8.5-RC0
> >
> >
> > The RC tag in git is: release-2.8.5-RC0
> >
> >
> >
> > The maven artifacts are available via repository.apache.org at:
> >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1140
> >
> >
> > Please try the release and vote; the vote will run for the usual 5
> > working
> > days, ending on 9/15/2018 PST time.
> >
> >
> > Thanks,
> >
> >
> > Junping
> >
> > -
> > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> >
> >
>


Re: [VOTE] Release Apache Hadoop 3.1.1 - RC0

2018-08-07 Thread Eric Badger
+1 (non-binding)

- Verified all hashes and checksums
- Built from source on macOS 10.13.6, Java 1.8.0u65
- Deployed a pseudo cluster
- Ran some example jobs


On Tue, Aug 7, 2018 at 8:46 AM, Shashikant Banerjee <
sbaner...@hortonworks.com> wrote:

> +1(non-binding)
>
> * checked out git tag release-3.1.1-RC0
> * built from source
> * deployed on a single node cluster
> * executed basic dfs commands
> * executed some snapshot commands
>
> Thank you very much the work Wangda.
>
> Thanks
> Shashi
>
> On 8/7/18, 6:43 PM, "Elek, Marton"  wrote:
>
>
> +1 (non-binding)
>
> 1. Built from the source package.
> 2. Checked the signature
> 3. Started docker based pseudo cluster and smoketested some basic
> functionality (hdfs cli, ec cli, viewfs, yarn examples, spark word
> count
> job)
>
> Thank you very much the work Wangda.
> Marton
>
>
> On 08/02/2018 08:43 PM, Wangda Tan wrote:
> > Hi folks,
> >
> > I've created RC0 for Apache Hadoop 3.1.1. The artifacts are
> available here:
> >
> > http://people.apache.org/~wangda/hadoop-3.1.1-RC0/
> >
> > The RC tag in git is release-3.1.1-RC0:
> > https://github.com/apache/hadoop/commits/release-3.1.1-RC0
> >
> > The maven artifacts are available via repository.apache.org at
> > https://repository.apache.org/content/repositories/orgapachehadoop-1139/
> >
> > You can find my public key at
> > http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS
> >
> > This vote will run 5 days from now.
> >
> > 3.1.1 contains 435 [1] fixed JIRA issues since 3.1.0.
> >
> > I have done testing with a pseudo cluster and distributed shell job.
> My +1
> > to start.
> >
> > Best,
> > Wangda Tan
> >
> > [1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in
> (3.1.1)
> > ORDER BY priority DESC
> >
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>
>
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>
>


Re: [VOTE] Release Apache Hadoop 2.7.7

2018-07-16 Thread Eric Badger
+1 (non-binding)

- Verified all hashes and checksums
- Built from source on macOS 10.13.5, Java 1.8.0u65
- Deployed a pseudo cluster
- Ran some example jobs


On Mon, Jul 16, 2018 at 2:18 PM, Kihwal Lee  wrote:

> +1 (binding)
> -Built from the source
> -Brought up a single node pseudo-distributed cluster
> -Ran basic tests
> -Verified basic hdfs and dfsadmin commands working
>
>
> On Wed, Jul 11, 2018 at 10:05 AM Steve Loughran 
> wrote:
>
> >
> >
> > Hi
> >
> > I've got RC0 of Hadoop 2.7.7 up for people to download and play with
> >
> > http://people.apache.org/~stevel/hadoop-2.7.7/RC0/
> >
> > Nexus artifacts 2.7.7 are up in staging
> > https://repository.apache.org/content/repositories/orgapachehadoop-1135
> >
> > Git (signed) tag release-2.7.7-RC0, checksum
> > c1aad84bd27cd79c3d1a7dd58202a8c3ee1ed3ac
> >
> > My current GPG key is 38237EE425050285077DB57AD22CF846DBB162A0
> > you can download this direct (and it MUST be direct) from the ASF HTTPS
> > site
> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >
> > Please try the release and vote. The vote will run for 5 days.
> >
> > Thanks
> >
> > -Steve
> >
> > (I should add: this is my first ever attempt at a Hadoop release, please
> > be (a) rigorous and (b) forgiving. Credit to Wei-Chiu Chuang for his
> > advice).
> >
> >
> > -
> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >
> >
>


Re: Apache Hadoop 3.0.3 Release plan

2018-05-23 Thread Eric Badger
> My thinking is to cut the branch in next couple of days and create RC for
> vote at the end of month.
> We plan to cut branch-3.0.3 by the coming Wednesday (May 9th) and vote
> for RC on May 30th
> I much prefer to wait to cut the branch until just before the production
> of the release and the vote. With so many branches, we sometimes miss
> putting critical bug fixes in unreleased branches if the branch is cut
> too early.

Echoing Eric Payne, I think we should wait to cut the branch until we are
actually creating the RC to vote on (i.e. on May 29 or 30 if the vote is to
be on May 30).

Eric



On Wed, May 23, 2018 at 4:11 PM, Yongjun Zhang  wrote:

> Hi,
>
> I have gardened the jiras for 3.0.3, and have the following open issues:
>
> https://issues.apache.org/jira/issues/?filter=12343970
>
> Two of them are blockers, one of them (YARN-8346) has already got +1 for
> patch, the other (YARN-8108) will take longer time to resolve and it seems
> we can possibly push it to next release given 3.0.2 also has the issue.
>
> My thinking is to cut the branch in next couple of days and create RC for
> vote at the end of month.
>
> Comments are welcome.
>
> Thanks,
>
> --Yongjun
>
> On Tue, May 8, 2018 at 11:40 AM, Vrushali C 
> wrote:
>
> > +1 for including the YARN-7190 patch in 3.0.3 release. This is a fix that
> > will enable HBase to use Hadoop 3.0.x in the production line.
> >
> > thanks
> > Vrushali
> >
> >
> > On Tue, May 8, 2018 at 10:24 AM, Yongjun Zhang 
> > wrote:
> >
> >> Thanks Wei-Chiu and Haibo for the feedback!
> >>
> >> Good thing is that I made the following note a couple of days ago when
> >> I looked at the branch diff, so we are on the same page:
> >>
> >>  496dc57 Revert "YARN-7190. Ensure only NM classpath in 2.x gets
> TSv2
> >> related hbase jars, not the user classpath. Contributed by Varun
> Saxena."
> >>
> >> YARN-7190 is not in 3.0.2, I will include it in 3.0.3 per the comment
> >> below:
> >> https://issues.apache.org/jira/browse/YARN-7190?focusedCommentId=16457649&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16457649
> >>
> >>
> >> In addition, I will revert https://issues.apache.org/jira/browse/HADOOP-13055
> >> from 3.0.3 since it's a feature.
> >>
> >>
> >> Best,
> >>
> >> --Yongjun
> >>
> >> On Tue, May 8, 2018 at 8:57 AM, Haibo Chen 
> >> wrote:
> >>
> >> > +1 on adding YARN-7190 to Hadoop 3.0.x despite the fact that it is
> >> > technically incompatible.
> >> > It is critical enough to justify being an exception, IMO.
> >> >
> >> > Added Rohith and Vrushali
> >> >
> >> > On Tue, May 8, 2018 at 6:20 AM, Wei-Chiu Chuang 
> >> > wrote:
> >> >
> >> >> Thanks Yongjun for driving 3.0.3 release!
> >> >>
> >> >> IMHO, could we consider adding YARN-7190 into the list?
> >> >> I understand that it is listed as an incompatible change, however,
> >> because
> >> >> of this bug, HBase considers the entire Hadoop 3.0.x line not
> >> production
> >> >> ready. I feel there's not much point releasing any more 3.0.x
> releases
> >> if
> >> >> downstream projects can't pick it up (given that HBase is one of
> >> >> the most important projects around Hadoop).
> >> >>
> >> >> On Mon, May 7, 2018 at 1:19 PM, Yongjun Zhang 
> >> >> wrote:
> >> >>
> >> >> > Hi Eric,
> >> >> >
> >> >> > Thanks for the feedback, good point. I will try to clean up things,
> >> then
> >> >> > cut branch before the release production and vote.
> >> >> >
> >> >> > Best,
> >> >> >
> >> >> > --Yongjun
> >> >> >
> >> >> > On Mon, May 7, 2018 at 8:39 AM, Eric Payne <
> eric.payne1...@yahoo.com
> >> .
> >> >> > invalid
> >> >> > > wrote:
> >> >> >
> >> >> > > >  We plan to cut branch-3.0.3 by the coming Wednesday (May 9th)
> >> and
> >> >> vote
> >> >> > > for RC on May 30th
> >> >> > > I much prefer to wait to cut the branch until just before the
> >> >> production
> >> >> > > of the release and the vote. With so many branches, we sometimes
> >> miss
> >> >> > > putting critical bug fixes in unreleased branches if the branch
> is
> >> cut
> >> >> > too
> >> >> > > early.
> >> >> > >
> >> >> > > My 2 cents...
> >> >> > > Thanks,
> >> >> > > -Eric Payne
> >> >> > >
> >> >> > >
> >> >> > >
> >> >> > >
> >> >> > >
> >> >> > > On Monday, May 7, 2018, 12:09:00 AM CDT, Yongjun Zhang <
> >> >> > > yjzhan...@apache.org> wrote:
> >> >> > >
> >> >> > >
> >> >> > >
> >> >> > >
> >> >> > >
> >> >> > > Hi All,
> >> >> > >
> >> >> > > >
> >> >> > > We have released Apache Hadoop 3.0.2 in April of this year [1].
> >> Since
> >> >> > then,
> >> >> > > there are quite some commits done to branch-3.0. To further
> improve
> >> >> the
> >> 

Re: [VOTE] Release Apache Hadoop 2.9.1 (RC0)

2018-04-27 Thread Eric Badger
+1 (non-binding)

- Verified all hashes and checksums
- Built from source on macOS 10.13.4, Java 1.8.0u65
- Deployed a pseudo cluster
- Ran some example jobs

Eric

On Thu, Apr 26, 2018 at 11:16 PM, Takanobu Asanuma 
wrote:

> Thanks for working on this, Sammi!
>
> +1 (non-binding)
>- Verified checksums
>- Succeeded "mvn clean package -Pdist,native -Dtar -DskipTests"
>- Started hadoop cluster with 1 master and 5 slaves
>- Run TeraGen/TeraSort
>- Verified some hdfs operations
>- Verified Web UI (NameNode, ResourceManager(classic and V2),
> JobHistory, Timeline)
>
> Thanks,
> -Takanobu
>
> > -Original Message-
> > From: Jinhu Wu [mailto:jinhu.wu@gmail.com]
> > Sent: Friday, April 27, 2018 12:39 PM
> > To: Gabor Bota 
> > Cc: Chen, Sammi ; junping...@apache.org; Hadoop
> > Common ; Rushabh Shah ;
> > hdfs-dev ; mapreduce-...@hadoop.apache.org;
> > yarn-...@hadoop.apache.org
> > Subject: Re: [VOTE] Release Apache Hadoop 2.9.1 (RC0)
> >
> > Thanks Sammi for driving the release work!
> >
> > +1 (non-binding)
> >
> > based on following verification work:
> > - built succeed from source on Mac OSX 10.13.4, java version "1.8.0_151"
> > - run hadoop-aliyun tests successfully on cn-shanghai endpoint
> > - deployed a one node cluster and verified PI job
> > - verfied word-count job by using hadoop-aliyun as storage.
> >
> > Thanks,
> > jinhu
> >
> > On Fri, Apr 27, 2018 at 12:45 AM, Gabor Bota 
> > wrote:
> >
> > >   Thanks for the work Sammi!
> > >
> > >   +1 (non-binding)
> > >
> > >-   checked out git tag release-2.9.1-RC0
> > >-   S3A unit (mvn test) and integration (mvn verify) test run were
> > >successful on us-west-2
> > >-   built from source on Mac OS X 10.13.4, openjdk 1.8.0_144 (zulu)
> > >-   deployed on a 3 node cluster
> > >-   verified pi job, teragen, terasort and teravalidate
> > >
> > >
> > >   Regards,
> > >   Gabor Bota
> > >
> > > On Wed, Apr 25, 2018 at 7:12 AM, Chen, Sammi 
> wrote:
> > >
> > > >
> > > > Paste the links here,
> > > >
> > > > The artifacts are available here: https://dist.apache.org/repos/dist/dev/hadoop/2.9.1-RC0/
> > > >
> > > > The RC tag in git is release-2.9.1-RC0. Last git commit SHA is
> > > > e30710aea4e6e55e69372929106cf119af06fd0e.
> > > >
> > > > The maven artifacts are available at:
> > > >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1115/
> > > >
> > > > My public key is available from:
> > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > > >
> > > >
> > > > Bests,
> > > > Sammi
> > > > -Original Message-
> > > > From: Chen, Sammi [mailto:sammi.c...@intel.com]
> > > > Sent: Wednesday, April 25, 2018 12:02 PM
> > > > To: junping...@apache.org
> > > > Cc: Hadoop Common ; Rushabh Shah <
> > > > rusha...@oath.com>; hdfs-dev ;
> > > > mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
> > > > Subject: RE: [VOTE] Release Apache Hadoop 2.9.1 (RC0)
> > > >
> > > >
> > > > Thanks Jason Lowe for the quick investigation finding that the
> > > > test failures are test-only issues.
> > > >
> > > > Based on the current facts, I would like to continue calling the
> > > > VOTE for
> > > > 2.9.1 RC0,  and extend the vote deadline to end of this week 4/27.
> > > >
> > > >
> > > > I will add following note to the final release notes,
> > > >
> > > > HADOOP-15385: Test case failures in the Hadoop-distcp project don't
> > > > impact the distcp function in 2.9.1.
> > > >
> > > >
> > > > Bests,
> > > > Sammi
> > > > From: 俊平堵 [mailto:junping...@apache.org]
> > > > Sent: Tuesday, April 24, 2018 11:50 PM
> > > > To: Chen, Sammi 
> > > > Cc: Hadoop Common ; Rushabh Shah <
> > > > rusha...@oath.com>; hdfs-dev ;
> > > > mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
> > > > Subject: Re: [VOTE] Release Apache Hadoop 2.9.1 (RC0)
> > > >
> > > > Thanks for reporting the issue, Rushabh! Actually, we found that
> > > > these test failures are test issues, not production issues, so not
> > > > really a solid blocker for the release. Anyway, I will let the RM of
> > > > 2.9.1 decide whether to cancel the RC or not for this test issue.
> > > >
> > > > Thanks,
> > > >
> > > > Junping
> > > >
> > > >
> > > > Chen, Sammi wrote on Tuesday, April 24, 2018 at 7:50 PM:
> > > > Hi Rushabh,
> > > >
> > > > Thanks for reporting the issue.  I will upload a new RC candidate
> > > > soon after the test failing issue is resolved.
> > > >
> > > >
> > > > Bests,
> > > > Sammi Chen
> > > > From: Rushabh 

Re: [VOTE] Release Apache Hadoop 2.7.5 (RC1)

2017-12-12 Thread Eric Badger
Thanks, Konstantin. Everything looks good to me

+1 (non-binding)

- Verified all signatures and digests
- Built from source on macOS 10.12.6, Java 1.8.0u65
- Deployed a pseudo cluster
- Ran some example jobs

Eric

On Tue, Dec 12, 2017 at 11:01 AM, Jason Lowe  wrote:

> Thanks for driving the release, Konstantin!
>
> +1 (binding)
>
> - Verified signatures and digests
> - Successfully performed a native build from source
> - Deployed a single-node cluster
> - Ran some sample jobs and checked the logs
>
> Jason
>
>
> On Thu, Dec 7, 2017 at 9:22 PM, Konstantin Shvachko 
> wrote:
>
> > Hi everybody,
> >
> > I updated CHANGES.txt and fixed documentation links.
> > Also committed  MAPREDUCE-6165, which fixes a consistently failing test.
> >
> > This is RC1 for the next dot release of Apache Hadoop 2.7 line. The
> > previous one, 2.7.4, was released August 4, 2017.
> > Release 2.7.5 includes critical bug fixes and optimizations. See more
> > details in Release Note:
> > http://home.apache.org/~shv/hadoop-2.7.5-RC1/releasenotes.html
> >
> > RC1 is available at: http://home.apache.org/~shv/hadoop-2.7.5-RC1/
> >
> > Please give it a try and vote on this thread. The vote will run for 5
> days
> > ending 12/13/2017.
> >
> > My up to date public key is available from:
> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >
> > Thanks,
> > --Konstantin
> >
>


Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

2017-12-12 Thread Eric Badger
Thanks, Junping

+1 (non-binding) looks good from my end

- Verified all hashes and checksums
- Built from source on macOS 10.12.6, Java 1.8.0u65
- Deployed a pseudo cluster
- Ran some example jobs

Eric

On Tue, Dec 12, 2017 at 12:55 PM, Konstantin Shvachko 
wrote:

> Downloaded again, now the checksums look good. Sorry my fault
>
> Thanks,
> --Konstantin
>
> On Mon, Dec 11, 2017 at 5:03 PM, Junping Du  wrote:
>
> > Hi Konstantin,
> >
> >  Thanks for verification and comments. I was verifying your example
> > below but found it is actually matched:
> >
> >
> > jduMBP:hadoop-2.8.3 jdu$ md5 ~/Downloads/hadoop-2.8.3-src.tar.gz
> > MD5 (/Users/jdu/Downloads/hadoop-2.8.3-src.tar.gz) =
> > e53d04477b85e8b58ac0a26468f04736
> >
> > What's your md5 checksum for given source tar ball?
> >
> >
> > Thanks,
> >
> >
> > Junping
> >
> >
> > --
> > *From:* Konstantin Shvachko 
> > *Sent:* Saturday, December 9, 2017 11:06 AM
> > *To:* Junping Du
> > *Cc:* common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org;
> > mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
> > *Subject:* Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)
> >
> > Hey Junping,
> >
> > Could you please upload the mds files relative to the tar.gz etc. files
> > rather than their full paths
> > /build/source/target/artifacts/hadoop-2.8.3-src.tar.gz:
> >MD5 = E5 3D 04 47 7B 85 E8 B5  8A C0 A2 64 68 F0 47 36
> >
> > Otherwise mds don't match for me.
> >
> > Thanks,
> > --Konstantin
> >
> > On Tue, Dec 5, 2017 at 1:58 AM, Junping Du  wrote:
> >
> >> Hi all,
> >>  I've created the first release candidate (RC0) for Apache Hadoop
> >> 2.8.3. This is our next maintenance release following 2.8.2. It includes 79
> >> important fixes and improvements.
> >>
> >>   The RC artifacts are available at: http://home.apache.org/~junping_du/hadoop-2.8.3-RC0
> >>
> >>   The RC tag in git is: release-2.8.3-RC0
> >>
> >>   The maven artifacts are available via repository.apache.org at:
> >> https://repository.apache.org/content/repositories/orgapachehadoop-1072
> >>
> >>   Please try the release and vote; the vote will run for the usual 5
> >> working days, ending on 12/12/2017 PST time.
> >>
> >> Thanks,
> >>
> >> Junping
> >>
> >
> >
>


Re: [VOTE] Release Apache Hadoop 2.9.0 (RC3)

2017-11-15 Thread Eric Badger
+1 (non-binding)

- Verified all hashes and checksums
- Built from source on macOS 10.12.6, Java 1.8.0u65
- Deployed a pseudo cluster
- Ran some example jobs

Thanks,

Eric

On Wed, Nov 15, 2017 at 10:24 AM, Carlo Aldo Curino 
wrote:

> +1 (binding)
>
> On Nov 15, 2017 8:23 AM, "Mukul Kumar Singh" 
> wrote:
>
> > +1 (non-binding)
> >
> > I built from source on Mac OS X 10.13.1 Java 1.8.0_111
> >
> > - Deployed on a single node cluster.
> > - Deployed a ViewFS cluster with two hdfs mount points.
> > - Performed basic sanity checks.
> > - Performed DFS operations(put, ls, mkdir, touch)
> >
> > Thanks,
> > Mukul
> >
> >
> > > On 14-Nov-2017, at 5:40 AM, Arun Suresh  wrote:
> > >
> > > Hi Folks,
> > >
> > > Apache Hadoop 2.9.0 is the first release of the Hadoop 2.9 line and will
> > > be the starting release for the Apache Hadoop 2.9.x line. It includes 30
> > > new features with 500+ subtasks, 407 improvements, and 790 bug fixes, all
> > > newly fixed issues since 2.8.2.
> > >
> > > More information about the 2.9.0 release plan can be found here:
> > > https://cwiki.apache.org/confluence/display/HADOOP/Roadmap#Roadmap-Version2.9
> > >
> > > New RC is available at: https://home.apache.org/~asuresh/hadoop-2.9.0-RC3/
> > >
> > > The RC tag in git is: release-2.9.0-RC3, and the latest commit id is:
> > > 756ebc8394e473ac25feac05fa493f6d612e6c50.
> > >
> > > The maven artifacts are available via repository.apache.org at:
> > > https://repository.apache.org/content/repositories/orgapachehadoop-1068/
> > >
> > > We are carrying over the votes from the previous RC given that the
> delta
> > is
> > > the license fix.
> > >
> > > Given the above - we are also going to stick with the original deadline
> > for
> > > the vote : ending on Friday 17th November 2017 2pm PT time.
> > >
> > > Thanks,
> > > -Arun/Subru
> >
> >
> > -
> > To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
> >
> >
>


Re: [VOTE] Release Apache Hadoop 2.9.0 (RC0)

2017-11-07 Thread Eric Badger
+1 (non-binding) pending the issue that Sunil/Rohith pointed out

- Verified all hashes and checksums
- Built from source on macOS 10.12.6, Java 1.8.0u65
- Deployed a pseudo cluster
- Ran some example jobs

Thanks,

Eric

On Tue, Nov 7, 2017 at 4:03 PM, Wangda Tan  wrote:

> Sunil / Rohith,
>
> Could you check if your configs are same as Jonathan posted configs?
> https://issues.apache.org/jira/browse/YARN-7453?focusedCommentId=16242693&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16242693
>
> And could you try if using Jonathan's configs can still reproduce the
> issue?
>
> Thanks,
> Wangda
>
>
> On Tue, Nov 7, 2017 at 1:52 PM, Arun Suresh  wrote:
>
> > Thanks for testing Rohith and Sunil
> >
> > Can you please confirm if it is not a config issue at your end ?
> > We (both Jonathan and myself) just tried testing this on a fresh cluster
> > (both automatic and manual) and we are not able to reproduce this. I've
> > updated the YARN-7453 JIRA with details of testing.
> >
> > Cheers
> > -Arun/Subru
> >
> > On Tue, Nov 7, 2017 at 3:17 AM, Rohith Sharma K S <
> > rohithsharm...@apache.org
> > > wrote:
> >
> > > Thanks Sunil for confirmation. Btw, I have raised YARN-7453
> > > JIRA to track this
> > > issue.
> > >
> > > - Rohith Sharma K S
> > >
> > > On 7 November 2017 at 16:44, Sunil G  wrote:
> > >
> > >> Hi Subru and Arun.
> > >>
> > >> Thanks for driving 2.9 release. Great work!
> > >>
> > >> I installed cluster built from source.
> > >> - Ran few MR jobs with application priority enabled. Runs fine.
> > >> - Accessed new UI and it also seems fine.
> > >>
> > >> However I am also getting same issue as Rohith reported.
> > >> - Started an HA cluster
> > >> - Pushed RM to standby
> > >> - Pushed back RM to active then seeing an exception.
> > >>
> > >> org.apache.hadoop.ha.ServiceFailedException: RM could not transition to Active
> > >> at org.apache.hadoop.yarn.server.resourcemanager.ActiveStandbyElectorBasedElectorService.becomeActive(ActiveStandbyElectorBasedElectorService.java:146)
> > >> at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:894)
> > >>
> > >> Caused by: org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth
> > >> at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
> > >> at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:949)
> > >>
> > >> Will check and post more details,
> > >>
> > >> - Sunil
> > >>
> > >>
> > >> On Tue, Nov 7, 2017 at 12:47 PM Rohith Sharma K S <
> > >> rohithsharm...@apache.org>
> > >> wrote:
> > >>
> > >> > Thanks Subru/Arun for the great work!
> > >> >
> > >> > Downloaded source and built from it. Deployed RM HA non-secured
> > cluster
> > >> > along with new YARN UI and ATSv2.
> > >> >
> > >> > I am facing a basic RM HA switch issue after the first successful
> > >> > start. *Is anyone else facing this issue?*
> > >> >
> > >> > When RM is switched from ACTIVE to STANDBY to ACTIVE, RM never
> > >> > switches to active successfully. The exception trace I see in the log is
> > >> >
> > >> > 2017-11-07 12:35:56,540 WARN org.apache.hadoop.ha.ActiveStandbyElector:
> > >> > Exception handling the winning of election
> > >> > org.apache.hadoop.ha.ServiceFailedException: RM could not transition to Active
> > >> > at org.apache.hadoop.yarn.server.resourcemanager.ActiveStandbyElectorBasedElectorService.becomeActive(ActiveStandbyElectorBasedElectorService.java:146)
> > >> > at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:894)
> > >> > at org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:473)
> > >> > at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:599)
> > >> > at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> > >> > Caused by: org.apache.hadoop.ha.ServiceFailedException: Error when
> > >> > transitioning to Active mode
> > >> > at org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:325)
> > >> > at org.apache.hadoop.yarn.server.resourcemanager.ActiveStandbyElectorBasedElectorService.becomeActive(ActiveStandbyElectorBasedElectorService.java:144)
> > >> > ... 4 more
> > >> > Caused by: org.apache.hadoop.service.ServiceStateException:
> > >> > org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth
> > >> > at
> > 

Re: [VOTE] Release Apache Hadoop 2.8.2 (RC1)

2017-10-24 Thread Eric Badger
+1 (non-binding)

- Verified all hashes and checksums
- Built from source on macOS 10.12.6, Java 1.8.0u65
- Deployed a pseudo cluster
- Ran some example jobs

Thanks,

Eric

On Tue, Oct 24, 2017 at 12:59 AM, Mukul Kumar Singh 
wrote:

> Thanks Junping,
>
> +1 (non-binding)
>
> I built from source on Mac OS X 10.12.6 Java 1.8.0_111
>
> - Deployed on a single node cluster.
> - Deployed a ViewFS cluster with two hdfs mount points.
> - Performed basic sanity checks.
> - Performed basic DFS operations.
>
> Thanks,
> Mukul
>
>
>
>
>
>
> On 20/10/17, 6:12 AM, "Junping Du"  wrote:
>
> >Hi folks,
> > I've created our new release candidate (RC1) for Apache Hadoop 2.8.2.
> >
> > Apache Hadoop 2.8.2 is the first stable release of the Hadoop 2.8 line
> > and will be the latest stable/production release for Apache Hadoop - it
> > includes 315 new fixed issues since 2.8.1, of which 69 are marked as
> > blocker/critical.
> >
> >  More information about the 2.8.2 release plan can be found here:
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release
> >
> >  New RC is available at: http://home.apache.org/~junping_du/hadoop-2.8.2-RC1
> >
> >  The RC tag in git is: release-2.8.2-RC1, and the latest commit id
> is: 66c47f2a01ad9637879e95f80c41f798373828fb
> >
> >  The maven artifacts are available via repository.apache.org at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1064
> >
> >  Please try the release and vote; the vote will run for the usual 5
> days, ending on 10/24/2017 6pm PST time.
> >
> >Thanks,
> >
> >Junping
> >
>
> -
> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
>
>


Re: [VOTE] Release Apache Hadoop 3.0.0-beta1 RC0

2017-10-03 Thread Eric Badger
+1 (non-binding)

- Verified all checksums and signatures
- Built native from source on macOS 10.12.6 and RHEL 7.1
- Deployed a single node pseudo cluster
- Ran pi and sleep jobs
- Verified Docker was marked as experimental

Thanks,

Eric

On Tue, Oct 3, 2017 at 1:41 PM, John Zhuge  wrote:

> +1 (binding)
>
>- Verified checksums and signatures of all tarballs
>- Built source with native, Java 1.8.0_131-b11 on Mac OS X 10.12.6
>- Verified cloud connectors:
>   - All S3A integration tests
>   - All ADL live unit tests
>- Deployed both binary and built source to a pseudo cluster, passed the
>following sanity tests in insecure, SSL, and SSL+Kerberos mode:
>   - HDFS basic and ACL
>   - DistCp basic
>   - MapReduce wordcount (only failed in SSL+Kerberos mode for binary
>   tarball, probably unrelated)
>   - KMS and HttpFS basic
>   - Balancer start/stop
>
> Hit the following errors but they don't seem to be blocking:
>
> == Missing dependencies during build ==
>
> > ERROR: hadoop-aliyun has missing dependencies: json-lib-jdk15.jar
> > ERROR: hadoop-azure has missing dependencies: jetty-util-ajax-9.3.19.
> > v20170502.jar
> > ERROR: hadoop-azure-datalake has missing dependencies: okhttp-2.4.0.jar
> > ERROR: hadoop-azure-datalake has missing dependencies: okio-1.4.0.jar
>
>
> Filed HADOOP-14923, HADOOP-14924, and HADOOP-14925.
>
> == Unit tests failed in Kerberos+SSL mode for KMS and HttpFs default HTTP
> servlet /conf, /stacks, and /logLevel ==
>
> One example below:
>
> >Connecting to
> > https://localhost:14000/logLevel?log=org.apache.hadoop.fs.http.server.HttpFSServer
> >Exception in thread "main"
> > org.apache.hadoop.security.authentication.client.
> AuthenticationException:
> > Authentication failed, URL:
> > https://localhost:14000/logLevel?log=org.apache.hadoop.fs.http.server.HttpFSServer=jzhuge,
> > status: 403, message: GSSException: Failure unspecified at GSS-API level
> > (Mechanism level: Request is a replay (34))
>
>
> The /logLevel failure will affect command "hadoop daemonlog".
>
>
> On Tue, Oct 3, 2017 at 10:56 AM, Andrew Wang 
> wrote:
>
> > Thanks for all the votes thus far! We've gotten the binding +1's to close
> > the release, though are there contributors who could kick the tires on
> > S3Guard and YARN TSv2 alpha2? These are the two new features merged since
> > alpha4, so it'd be good to get some coverage.
> >
> >
> >
> > On Tue, Oct 3, 2017 at 9:45 AM, Brahma Reddy Battula 
> > wrote:
> >
> > >
> > > Thanks Andrew.
> > >
> > > +1 (non binding)
> > >
> > > --Built from source
> > > --installed 3 node HA cluster
> > > --Verified shell commands and UI
> > > --Ran wordcount/pi jobs
> > >
> > >
> > >
> > >
> > > On Fri, 29 Sep 2017 at 5:34 AM, Andrew Wang 
> > > wrote:
> > >
> > >> Hi all,
> > >>
> > >> Let me start, as always, by thanking the many, many contributors who
> > >> helped
> > >> with this release! I've prepared an RC0 for 3.0.0-beta1:
> > >>
> > >> http://home.apache.org/~wang/3.0.0-beta1-RC0/
> > >>
> > >> This vote will run five days, ending on Nov 3rd at 5PM Pacific.
> > >>
> > >> beta1 contains 576 fixed JIRA issues comprising a number of bug fixes,
> > >> improvements, and feature enhancements. Notable additions include the
> > >> addition of YARN Timeline Service v2 alpha2, S3Guard, completion of
> the
> > >> shaded client, and HDFS erasure coding pluggable policy support.
> > >>
> > >> I've done the traditional testing of running a Pi job on a pseudo
> > cluster.
> > >> My +1 to start.
> > >>
> > >> We're working internally on getting this run through our integration
> > test
> > >> rig. I'm hoping Vijay or Ray can ring in with a +1 once that's
> complete.
> > >>
> > >> Best,
> > >> Andrew
> > >>
> > > --
> > >
> > >
> > >
> > > --Brahma Reddy Battula
> > >
> >
>
>
>
> --
> John
>


Re: [DISCUSS] official docker image(s) for hadoop

2017-09-13 Thread Eric Badger
+1. I definitely think an official Hadoop Docker image (possibly one per major
or minor release) would be a positive both for contributors and for users
of Hadoop.

Eric

On Wed, Sep 13, 2017 at 1:19 PM, Wangda Tan  wrote:

> +1 to adding a Hadoop docker image for easier testing / prototyping, it's going to be
> super helpful!
>
> Thanks,
> Wangda
>
> On Wed, Sep 13, 2017 at 10:48 AM, Miklos Szegedi <
> miklos.szeg...@cloudera.com> wrote:
>
> > Marton, thank you for working on this. I think Official Docker images for
> > Hadoop would be very useful for a lot of reasons. I think that it is
> better
> > to have a coordinated effort with production ready base images with
> > dependent images for prototyping. Does anyone else have an opinion about
> > this?
> >
> > Thank you,
> > Miklos
> >
> > On Fri, Sep 8, 2017 at 5:45 AM, Marton, Elek  wrote:
> >
> > >
> > > TL;DR: I propose to create official hadoop images and upload them to
> the
> > > dockerhub.
> > >
> > > GOAL/SCOPE: I would like to improve the existing documentation with
> > > easy-to-use docker based recipes to start hadoop clusters with various
> > > configuration.
> > >
> > > The images could also be used to test experimental features. For example,
> > > Ozone could be tested easily with this compose file and configuration:
> > >
> > > https://gist.github.com/elek/1676a97b98f4ba561c9f51fce2ab2ea6
> > >
> > > Or even the configuration could be included in the compose file:
> > >
> > > https://github.com/elek/hadoop/blob/docker-2.8.0/example/docker-compose.yaml
> > >
> > > I would like to create separated example compose files for federation,
> > ha,
> > > metrics usage, etc. to make it easier to try out and understand the
> > > features.
> > >
> > > CONTEXT: There is an existing Jira:
> > > https://issues.apache.org/jira/browse/HADOOP-13397
> > > But it’s about a tool to generate production quality docker images
> > > (multiple types, in a flexible way). If no objections, I will create a
> > > separated issue to create simplified docker images for rapid
> prototyping
> > > and investigating new features. And register the branch to the
> dockerhub
> > to
> > > create the images automatically.
> > >
> > > MY BACKGROUND: I am working with docker based hadoop/spark clusters
> quite
> > > a while and run them successfully in different environments (kubernetes,
> > > docker-swarm, nomad-based scheduling, etc.) My work is available from
> > here:
> > > https://github.com/flokkr but they could handle more complex use cases
> > > (eg. instrumenting java processes with btrace, or read/reload
> > configuration
> > > from consul).
> > >  And IMHO in the official hadoop documentation it’s better to suggest
> to
> > > use official apache docker images and not external ones (which could be
> > > changed).
> > >
> > > Please let me know if you have any comments.
> > >
> > > Marton
> > >
> > > -
> > > To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
> > >
> > >
> >
>


Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)

2017-08-02 Thread Eric Badger
- Verified all checksums and signatures
- Built from src on macOS 10.12.6 with Java 1.8.0u65
- Deployed single node pseudo cluster
- Successfully ran sleep and pi jobs
- Navigated the various UIs 

+1 (non-binding)

Thanks,

Eric


On Wednesday, August 2, 2017 12:31 PM, Chris Douglas  
wrote:



On Wed, Aug 2, 2017 at 12:02 AM, Zhe Zhang  wrote:
> Quick question for Chris: was your +1 for the RC, or to support
> Konstantin's statement regarding packaging?

For Konstantin's statement. I haven't had a chance to look through the
RC yet, but it's on my list. -C

> On Mon, Jul 31, 2017 at 3:56 PM Chris Douglas  wrote:
>
>> On Mon, Jul 31, 2017 at 3:02 PM, Konstantin Shvachko
>>  wrote:
>> > For the packaging, here is the exact phrasing from the cited
>> release-policy
>> > document relevant to binaries:
>> > "As a convenience to users that might not have the appropriate tools to
>> > build a compiled version of the source, binary/bytecode packages MAY be
>> > distributed alongside official Apache releases. In all such cases, the
>> > binary/bytecode package MUST have the same version number as the source
>> > release and MUST only add binary/bytecode files that are the result of
>> > compiling that version of the source code release and its dependencies."
>> > I don't think my binary package violates any of these.
>>
>> +1 The PMC VOTE applies to source code, only. If someone wants to
>> rebuild the binary tarball with native libs and replace this one,
>> that's fine.
>>
>> My reading of the above is that source code must be distributed with
>> binaries, not that we omit the source code from binary releases... -C
>>
>> > But I'll upload an additional tar.gz with native bits and no src, as you
>> > guys requested.
>> > Will keep it as RC0 as there is no source code change and it comes from
>> the
>> > same build.
>> > Hope this is satisfactory.
>> >
>> > Thanks,
>> > --Konstantin
>> >
>> > On Mon, Jul 31, 2017 at 1:53 PM, Andrew Wang 
>> > wrote:
>> >
>> >> I agree with Brahma on the two issues flagged (having src in the binary
>> >> tarball, missing native libs). These are regressions from prior
>> releases.
>> >>
>> >> As an aside, "we release binaries as a convenience" doesn't relax the
>> >> quality bar. The binaries are linked on our website and distributed
>> through
>> >> official Apache channels. They have to adhere to Apache release
>> >> requirements. And, most users consume our work via Maven dependencies,
>> >> which are binary artifacts.
>> >>
>> >> http://www.apache.org/legal/release-policy.html goes into this in more
>> >> detail. A release must minimally include source packages, and can also
>> >> include binary artifacts.
>> >>
>> >> Best,
>> >> Andrew
>> >>
>> >> On Mon, Jul 31, 2017 at 12:30 PM, Konstantin Shvachko <
>> >> shv.had...@gmail.com> wrote:
>> >>
>> >>> To avoid any confusion in this regard. I built RC0 manually in
>> compliance
>> >>> with Apache release policy
>> >>> http://www.apache.org/legal/release-policy.html
>> >>> I edited the HowToReleasePreDSBCR page to make sure people don't use
>> >>> Jenkins option for building.
>> >>>
>> >>> A side note. This particular build is broken anyways, so no worries
>> there.
>> >>> I think though it would be useful to have it working for testing and
>> as a
>> >>> packaging standard.
>> >>>
>> >>> Thanks,
>> >>> --Konstantin
>> >>>
>> >>> On Mon, Jul 31, 2017 at 11:40 AM, Allen Wittenauer <
>> >>> a...@effectivemachines.com
>> >>> > wrote:
>> >>>
>> >>> >
>> >>> > > On Jul 31, 2017, at 11:20 AM, Konstantin Shvachko <
>> >>> shv.had...@gmail.com>
>> >>> > wrote:
>> >>> > >
>> >>> > > https://wiki.apache.org/hadoop/HowToReleasePreDSBCR
>> >>> >
>> >>> > FYI:
>> >>> >
>> >>> > If you are using ASF Jenkins to create an ASF release
>> >>> > artifact, it's pretty much an automatic vote failure as any such
>> >>> release is
>> >>> > in violation of ASF policy.
>> >>> >
>> >>> >
>> >>>
>> >>
>> >>
>>
>> -
>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>>
>> --
> Zhe Zhang
> Apache Hadoop Committer
> http://zhe-thoughts.github.io/about/ | @oldcap


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0

2017-07-06 Thread Eric Badger
- Verified all checksums and signatures
- Built from src on macOS 10.12.5 with Java 1.8.0u65
- Deployed single node pseudo cluster
- Successfully ran sleep and pi jobs
- Navigated the various UIs 

+1 (non-binding)

Thanks,

Eric

On Thursday, July 6, 2017 3:31 PM, Aaron Fabbri  wrote:



Thanks for the hard work on this!  +1 (non-binding)

- Built from source tarball on OS X w/ Java 1.8.0_45.
- Deployed mini/pseudo cluster.
- Ran grep and wordcount examples.
- Poked around ResourceManager and JobHistory UIs.
- Ran all s3a integration tests in US West 2.



On Thu, Jul 6, 2017 at 10:20 AM, Xiao Chen  wrote:

> Thanks Andrew!
> +1 (non-binding)
>
>- Verified md5's, checked tarball sizes are reasonable
>- Built source tarball and deployed a pseudo-distributed cluster with
>hdfs/kms
>- Tested basic hdfs/kms operations
>- Sanity checked webuis/logs
>
>
> -Xiao
>
> On Wed, Jul 5, 2017 at 10:33 PM, John Zhuge  wrote:
>
> > +1 (non-binding)
> >
> >
> >- Verified checksums and signatures of the tarballs
> >- Built source with native, Java 1.8.0_131 on Mac OS X 10.12.5
> >- Cloud connectors:
> >   - A few S3A integration tests
> >   - A few ADL live unit tests
> >- Deployed both binary and built source to a pseudo cluster, passed
> the
> >following sanity tests in insecure, SSL, and SSL+Kerberos mode:
> >   - HDFS basic and ACL
> >   - DistCp basic
> >   - WordCount (skipped in Kerberos mode)
> >   - KMS and HttpFS basic
> >
> > Thanks Andrew for the great effort!
> >
> > On Wed, Jul 5, 2017 at 1:33 PM, Eric Payne  > invalid>
> > wrote:
> >
> > > Thanks Andrew.
> > > I downloaded the source, built it, and installed it onto a pseudo
> > > distributed 4-node cluster.
> > >
> > > I ran mapred and streaming test cases, including sleep and wordcount.
> > > +1 (non-binding)
> > > -Eric
> > >
> > >   From: Andrew Wang 
> > >  To: "common-dev@hadoop.apache.org" ; "
> > > hdfs-...@hadoop.apache.org" ; "
> > > mapreduce-...@hadoop.apache.org" ; "
> > > yarn-...@hadoop.apache.org" 
> > >  Sent: Thursday, June 29, 2017 9:41 PM
> > >  Subject: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
> > >
> > > Hi all,
> > >
> > > As always, thanks to the many, many contributors who helped with this
> > > release! I've prepared an RC0 for 3.0.0-alpha4:
> > >
> > > http://home.apache.org/~wang/3.0.0-alpha4-RC0/
> > >
> > > The standard 5-day vote would run until midnight on Tuesday, July 4th.
> > > Given that July 4th is a holiday in the US, I expect this vote might
> have
> > > to be extended, but I'd like to close the vote relatively soon after.
> > >
> > > I've done my traditional testing of a pseudo-distributed cluster with a
> > > single task pi job, which was successful.
> > >
> > > Normally my testing would end there, but I'm slightly more confident
> this
> > > time. At Cloudera, we've successfully packaged and deployed a snapshot
> > from
> > > a few days ago, and run basic smoke tests. Some bugs found from this
> > > include HDFS-11956, which fixes backwards compat with Hadoop 2 clients,
> > and
> > > the revert of HDFS-11696, which broke NN QJM HA setup.
> > >
> > > Vijay is working on a test run with a fuller test suite (the results of
> > > which we can hopefully post soon).
> > >
> > > My +1 to start,
> > >
> > > Best,
> > > Andrew
> > >
> > >
> > >
> > >
> >
> >
> >
> > --
> > John
> >
>

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14440) Add metrics for connections dropped

2017-05-19 Thread Eric Badger (JIRA)
Eric Badger created HADOOP-14440:


 Summary: Add metrics for connections dropped
 Key: HADOOP-14440
 URL: https://issues.apache.org/jira/browse/HADOOP-14440
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Eric Badger
Assignee: Eric Badger


Will be useful to figure out when the NN is getting overloaded with more 
connections than it can handle



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14377) Increase Common test timeouts from 1 second to 10 seconds

2017-05-03 Thread Eric Badger (JIRA)
Eric Badger created HADOOP-14377:


 Summary: Increase Common test timeouts from 1 second to 10 seconds
 Key: HADOOP-14377
 URL: https://issues.apache.org/jira/browse/HADOOP-14377
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Eric Badger
Assignee: Eric Badger


1 second test timeouts are susceptible to failure on overloaded or otherwise 
slow machines
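The failure mode is easy to reproduce in miniature: a per-test watchdog (which is roughly what a JUnit timeout does) races the test body against a fixed budget, and a 1-second budget leaves little headroom for scheduling jitter on a loaded machine. A plain-Java sketch of such a watchdog; the class and method names are illustrative, not Hadoop code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutBudget {
    // Run a task under a wall-clock budget, returning false if it times out.
    static boolean runsWithin(Runnable task, long millis) throws Exception {
        ExecutorService ex = Executors.newSingleThreadExecutor();
        try {
            Future<?> f = ex.submit(task);
            try {
                f.get(millis, TimeUnit.MILLISECONDS);
                return true;
            } catch (TimeoutException e) {
                f.cancel(true); // interrupt the still-running task
                return false;
            }
        } finally {
            ex.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // A trivially fast task fits even a small budget here, but on an
        // overloaded machine scheduling jitter alone can eat most of a
        // 1-second budget, which is why bumping such timeouts to 10 seconds
        // removes a whole class of flaky failures.
        System.out.println(runsWithin(() -> { }, 1000));
    }
}
```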



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14372) TestSymlinkLocalFS timeouts are too low

2017-05-02 Thread Eric Badger (JIRA)
Eric Badger created HADOOP-14372:


 Summary: TestSymlinkLocalFS timeouts are too low
 Key: HADOOP-14372
 URL: https://issues.apache.org/jira/browse/HADOOP-14372
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Eric Badger
Assignee: Eric Badger


1 second timeouts are too aggressive for heavily loaded or slow machines



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14320) TestIPC.testIpcWithReaderQueuing fails intermittently

2017-04-18 Thread Eric Badger (JIRA)
Eric Badger created HADOOP-14320:


 Summary: TestIPC.testIpcWithReaderQueuing fails intermittently
 Key: HADOOP-14320
 URL: https://issues.apache.org/jira/browse/HADOOP-14320
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Eric Badger
Assignee: Eric Badger


{noformat}
org.mockito.exceptions.verification.TooLittleActualInvocations: 
callQueueManager.put();
Wanted 2 times:
-> at org.apache.hadoop.ipc.TestIPC.checkBlocking(TestIPC.java:810)
But was 1 time:
-> at org.apache.hadoop.ipc.Server.queueCall(Server.java:2466)

at org.apache.hadoop.ipc.TestIPC.checkBlocking(TestIPC.java:810)
at 
org.apache.hadoop.ipc.TestIPC.testIpcWithReaderQueuing(TestIPC.java:738)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Local trunk build is flaky

2017-04-17 Thread Eric Badger
For what it's worth, I successfully built trunk just now on macOS Sierra using 
mvn install -Pdist -Dtar -DskipTests -DskipShade -Dmaven.javadoc.skip
 

On Monday, April 17, 2017 12:32 PM, Zhe Zhang  wrote:
 

 Starting from last week, building trunk on my local Mac has been flaky. I
haven't tried Linux yet. The error is:

[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-enforcer-plugin:1.4.1:enforce (clean) on
project hadoop-assemblies: Some Enforcer rules have failed. Look above for
specific messages explaining why the rule failed. -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute
goal org.apache.maven.plugins:maven-enforcer-plugin:1.4.1:enforce (clean)
on project hadoop-assemblies: Some Enforcer rules have failed. Look above
for specific messages explaining why the rule failed.
at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217)
...

Anyone else seeing the same issue?

Thanks,

-- 
Zhe Zhang
Apache Hadoop Committer
http://zhe-thoughts.github.io/about/ | @oldcap


   

[jira] [Created] (HADOOP-14306) TestLocalFileSystem tests have very low timeouts

2017-04-13 Thread Eric Badger (JIRA)
Eric Badger created HADOOP-14306:


 Summary: TestLocalFileSystem tests have very low timeouts
 Key: HADOOP-14306
 URL: https://issues.apache.org/jira/browse/HADOOP-14306
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Eric Badger
Assignee: Eric Badger


Most tests have a timeout of 1 second, which is much too low, especially if 
there is a spinning disk involved. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14277) TestTrash.testTrashRestarts is flaky

2017-04-04 Thread Eric Badger (JIRA)
Eric Badger created HADOOP-14277:


 Summary: TestTrash.testTrashRestarts is flaky
 Key: HADOOP-14277
 URL: https://issues.apache.org/jira/browse/HADOOP-14277
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Eric Badger


{noformat}
junit.framework.AssertionFailedError: Expected num of checkpoints is 2, but 
actual is 3 expected:<2> but was:<3>
at junit.framework.Assert.fail(Assert.java:57)
at junit.framework.Assert.failNotEquals(Assert.java:329)
at junit.framework.Assert.assertEquals(Assert.java:78)
at junit.framework.Assert.assertEquals(Assert.java:234)
at junit.framework.TestCase.assertEquals(TestCase.java:401)
at 
org.apache.hadoop.fs.TestTrash.verifyAuditableTrashEmptier(TestTrash.java:892)
at org.apache.hadoop.fs.TestTrash.testTrashRestarts(TestTrash.java:593)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.8.0 (RC2)

2017-03-15 Thread Eric Badger
All on MacOS Sierra

Verified signatures
  - Minor note: Junping, I had a hard time finding your key. I grabbed the keys 
for hadoop from 
http://home.apache.org/keys/group/hadoop.asc and you had a key there, but it 
wasn't the one that you signed this commit with. Then with some help from Jason 
I found the correct key at 
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS. So it would be 
nice if those were in sync. 
Compiled from source
Deployed pseudo-distributed cluster
Ran some sample MR jobs

+1 (non-binding)

Thanks,

Eric


On Wednesday, March 15, 2017 2:58 PM, Junping Du  wrote:



The latest commit on RC2 is: e51312e8e106efb2ebd4844eecacb51026fac8b7.
btw, I think tags are immutable. Aren't they?

Thanks,

Junping


From: Steve Loughran
Sent: Wednesday, March 15, 2017 12:30 PM
To: Junping Du
Cc: common-dev@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.8.0 (RC2)

> On 14 Mar 2017, at 08:41, Junping Du  wrote:
>
> Hi all,
> With several important fixes get merged last week, I've created a new 
> release candidate (RC2) for Apache Hadoop 2.8.0.
>
> This is the next minor release to follow up 2.7.0 which has been released 
> for more than 1 year. It comprises 2,919 fixes, improvements, and new 
> features. Most of these commits are released for the first time in branch-2.
>
>  More information about the 2.8.0 release plan can be found here: 
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release
>
>  Please note that RC0 and RC1 are not voted public because significant 
> issues are found just after RC tag getting published.
>
>  The RC is available at: 
> http://home.apache.org/~junping_du/hadoop-2.8.0-RC2
>
>  The RC tag in git is: release-2.8.0-RC2

given tags are so easy to move, we need to be relying on one or more of:
-the commit ID,
-the tag being signed

Junping: what is the commit Id for the release?

>
>  The maven artifacts are available via repository.apache.org at: 
> https://repository.apache.org/content/repositories/orgapachehadoop-1056
>

thanks, I'll play with these downstream, as well as checking out and trying to 
build on windows

>  Please try the release and vote; the vote will run for the usual 5 days, 
> ending on 03/20/2017 PDT time.
>
> Thanks,
>
> Junping

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.0-alpha2 RC0

2017-01-24 Thread Eric Badger
+1 (non-binding)
- Verified signatures and md5
- Built from source
- Started single-node cluster on my mac
- Ran some sleep jobs

Eric

On Tuesday, January 24, 2017 4:32 PM, Yufei Gu  wrote:
 

 Hi Andrew,

Thanks for working on this.

+1 (Non-Binding)

1. Downloaded the binary and verified the md5.
2. Deployed it on 3 node cluster with 1 ResourceManager and 2 NodeManager.
3. Set YARN to use Fair Scheduler.
4. Ran MapReduce jobs Pi
5. Verified Hadoop version command output is correct.

Best,

Yufei

On Tue, Jan 24, 2017 at 3:02 AM, Marton Elek  wrote:

>
> > minicluster is kind of weird on filesystems that don't support mixed
> case, like OS X's default HFS+.
> >
> > $  jar tf hadoop-client-minicluster-3.0.0-alpha3-SNAPSHOT.jar | grep -i
> license
> > LICENSE.txt
> > license/
> > license/LICENSE
> > license/LICENSE.dom-documentation.txt
> > license/LICENSE.dom-software.txt
> > license/LICENSE.sax.txt
> > license/NOTICE
> > license/README.dom.txt
> > license/README.sax.txt
> > LICENSE
> > Grizzly_THIRDPARTYLICENSEREADME.txt
>
>
> I added a patch to https://issues.apache.org/jira/browse/HADOOP-14018 to
> add the missing META-INF/LICENSE.txt to the shaded files.
>
> Question: what should be done with the other LICENSE files in the
> minicluster. Can we just exclude them (from legal point of view)?
>
> Regards,
> Marton
>
> -
> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
>
>


   

[jira] [Created] (HADOOP-13976) Path globbing does not match newlines

2017-01-11 Thread Eric Badger (JIRA)
Eric Badger created HADOOP-13976:


 Summary: Path globbing does not match newlines
 Key: HADOOP-13976
 URL: https://issues.apache.org/jira/browse/HADOOP-13976
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Eric Badger
Assignee: Eric Badger


Need to add the DOTALL flag to allow for newlines to be accepted as well
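For illustration, a minimal sketch of why the flag matters (hypothetical code, not Hadoop's actual glob implementation): without `Pattern.DOTALL`, `.` does not match line terminators, so a glob translated to `.*` fails on paths containing embedded newlines.

```java
import java.util.regex.Pattern;

public class DotAllDemo {
    public static void main(String[] args) {
        // Hypothetical path containing an embedded newline.
        String path = "dir/file\nwith_newline";

        // Without DOTALL, '.' stops at the newline, so the full match fails.
        boolean plain = Pattern.compile("dir/.*").matcher(path).matches();

        // With DOTALL, '.' matches any character, including '\n'.
        boolean dotall = Pattern.compile("dir/.*", Pattern.DOTALL)
                                .matcher(path).matches();

        System.out.println(plain + " " + dotall); // prints "false true"
    }
}
```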



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-10-11 Thread Eric Badger (JIRA)
Eric Badger created HADOOP-13709:


 Summary: Clean up subprocesses spawned by Shell.java:runCommand 
when the shell process exits
 Key: HADOOP-13709
 URL: https://issues.apache.org/jira/browse/HADOOP-13709
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Eric Badger
Assignee: Eric Badger


The runCommand code in Shell.java can get into a situation where it will ignore 
InterruptedExceptions and refuse to shutdown due to being in I/O waiting for 
the return value of the subprocess that was spawned. We need to allow for the 
subprocess to be interrupted and killed when the shell process gets killed. 
Currently the JVM will shutdown and all of the subprocesses will be orphaned 
and not killed.
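A minimal sketch of the cleanup idea (an assumption about the approach — a shutdown hook plus Process.destroy — not necessarily how Shell.java was actually fixed; it also assumes a POSIX `sleep` binary is on the PATH):

```java
public class SubprocessCleanupSketch {
    public static void main(String[] args) throws Exception {
        // Spawn a long-running child, as Shell.runCommand would.
        Process child = new ProcessBuilder("sleep", "60").start();

        // Shutdown hook: if this JVM exits while the child is still
        // running, destroy it rather than leaving it orphaned.
        Thread hook = new Thread(child::destroy);
        Runtime.getRuntime().addShutdownHook(hook);

        try {
            // A real caller would block here interruptibly:
            //   int exit = child.waitFor();
            // Here we simulate the interrupted/killed path instead.
            child.destroy();
            child.waitFor(); // reap the killed child
        } finally {
            Runtime.getRuntime().removeShutdownHook(hook);
        }

        System.out.println(child.isAlive()); // prints "false"
    }
}
```

The key design point is that the wait for the child's exit code must remain interruptible, with the child explicitly destroyed on both the interruption path and JVM shutdown.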



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13571) ServerSocketUtil.getPort() should use loopback address, not 0.0.0.0

2016-09-01 Thread Eric Badger (JIRA)
Eric Badger created HADOOP-13571:


 Summary: ServerSocketUtil.getPort() should use loopback address, 
not 0.0.0.0
 Key: HADOOP-13571
 URL: https://issues.apache.org/jira/browse/HADOOP-13571
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Eric Badger


Using 0.0.0.0 to check for a free port will succeed even if there's something 
bound to that same port on the loopback interface. Since this function is used 
primarily in testing, it should be checking the loopback interface for free 
ports.
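A minimal sketch of the suggested direction (hypothetical helper name, not the actual ServerSocketUtil code): bind the probe socket to the loopback address explicitly instead of the 0.0.0.0 wildcard.

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class LoopbackPortProbe {
    // Bind to 127.0.0.1 explicitly rather than the 0.0.0.0 wildcard, so
    // the probe reflects what is in use on the loopback interface that
    // tests actually connect to.
    static int freeLoopbackPort() throws IOException {
        try (ServerSocket s =
                 new ServerSocket(0, 50, InetAddress.getLoopbackAddress())) {
            return s.getLocalPort(); // ephemeral port chosen by the OS
        }
    }

    public static void main(String[] args) throws IOException {
        int port = freeLoopbackPort();
        System.out.println(port >= 1 && port <= 65535); // prints "true"
    }
}
```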



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Eric Badger
Well that's embarrassing. I had accidentally slightly renamed my 
log4j.properties file in my conf directory, so it was there, just not being 
read. Apologies for the unnecessary spam. With this and the public key from 
Andrew, I give my non-binding +1. 

Eric



On Tuesday, August 30, 2016 4:11 PM, Allen Wittenauer 
<a...@effectivemachines.com> wrote:


> On Aug 30, 2016, at 2:06 PM, Eric Badger <ebad...@yahoo-inc.com.INVALID> 
> wrote:
> 
> 
> WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.

^^


> 
> After running the above command, the RM UI showed a successful job, but as 
> you can see, I did not have anything printed onto the command line. Hopefully 
> this is just a misconfiguration on my part, but I figured that I would point 
> it out just in case.


It gave you a very important message in the output ...

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Eric Badger
I don't know why my email client keeps getting rid of all of my spacing. 
Resending the same email so that it is actually legible...

All on OSX 10.11.6:
- Verified the hashes. However, Andrew, I don't know where to find your public 
key, so I wasn't able to verify that they were signed by you.
- Built from source
- Deployed a pseudo-distributed cluster
- Ran a few sample jobs
- Poked around the RM UI
- Poked around the attached website locally via the tarball


I did find one odd thing, though. It could be a misconfiguration on my system, 
but I've never had this problem before with other releases (though I deal 
almost exclusively in 2.x and so I imagine things might be different). When I 
run a sleep job, I do not see any diagnostics/logs/counters printed out by the 
client. Initially I ran the job like I would on 2.7 and it failed (because I 
had not set yarn.app.mapreduce.am.env and mapreduce.admin.user.env), but I 
didn't see anything until I looked at the RM UI. There I was able to see all of 
the logs for the failed job and diagnose the issue. Then, once I fixed my 
parameters and ran the job again, I still didn't see any 
diagnostics/logs/counters.


ebadger@foo: env | grep HADOOP
HADOOP_HOME=/Users/ebadger/Downloads/hadoop-3.0.0-alpha1-src/hadoop-dist/target/hadoop-3.0.0-alpha1/
HADOOP_CONF_DIR=/Users/ebadger/conf
ebadger@foo: $HADOOP_HOME/bin/hadoop jar 
$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha1-tests.jar
 sleep -Dyarn.app.mapreduce.am.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" 
-Dmapreduce.admin.user.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" -mt 1 -rt 1 -m 1 
-r 1
WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
ebadger@foo:


After running the above command, the RM UI showed a successful job, but as you 
can see, I did not have anything printed onto the command line. Hopefully this 
is just a misconfiguration on my part, but I figured that I would point it out 
just in case.


Thanks,


Eric



On Tuesday, August 30, 2016 4:00 PM, Eric Badger 
<ebad...@yahoo-inc.com.INVALID> wrote:



All on OSX 10.11.6:
- Verified the hashes. However, Andrew, I don't know where to find your public 
key, so I wasn't able to verify that they were signed by you.
- Built from source
- Deployed a pseudo-distributed cluster
- Ran a few sample jobs
- Poked around the RM UI
- Poked around the attached website locally via the tarball
I did find one odd thing, though. It could be a misconfiguration on my system, 
but I've never had this problem before with other releases (though I deal 
almost exclusively in 2.x and so I imagine things might be different). When I 
run a sleep job, I do not see any diagnostics/logs/counters printed out by the 
client. Initially I ran the job like I would on 2.7 and it failed (because I 
had not set yarn.app.mapreduce.am.env and mapreduce.admin.user.env), but I 
didn't see anything until I looked at the RM UI. There I was able to see all of 
the logs for the failed job and diagnose the issue. Then, once I fixed my 
parameters and ran the job again, I still didn't see any 
diagnostics/logs/counters.
ebadger@foo: env | grep HADOOP
HADOOP_HOME=/Users/ebadger/Downloads/hadoop-3.0.0-alpha1-src/hadoop-dist/target/hadoop-3.0.0-alpha1/
HADOOP_CONF_DIR=/Users/ebadger/conf
ebadger@foo: $HADOOP_HOME/bin/hadoop jar 
$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha1-tests.jar
 sleep -Dyarn.app.mapreduce.am.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" 
-Dmapreduce.admin.user.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" -mt 1 -rt 1 -m 1 
-r 1
WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
ebadger@foo:
After running the above command, the RM UI showed a successful job, but as you 
can see, I did not have anything printed onto the command line. Hopefully this 
is just a misconfiguration on my part, but I figured that I would point it out 
just in case.
Thanks,
Eric



On Tuesday, August 30, 2016 12:58 PM, Andrew Wang 
<andrew.w...@cloudera.com> wrote:


I'll put my own +1 on it:

* Built from source
* Started pseudo cluster and ran Pi job successfully

On Tue, Aug 30, 2016 at 10:17 AM, Zhe Zhang <z...@apache.org> wrote:

>
> Thanks Andrew for the great work! It's really exciting to finally see a
> Hadoop 3 RC.
>
> I noticed CHANGES and RELEASENOTES markdown files which were not in
> previous RCs like 2.7.3. What are good tools to verify them? I tried
> reading them on IntelliJ but format looks odd.
>
> I'm still testing the RC:
> - Downloaded and verified checksum
> - Built from source
> - Will start small cluster and test simple programs, focusing on EC
> functionalities
>
> -- Zhe
>
> On Tue, Aug 30, 2016 at 8:51 AM Andrew Wang <andrew.w...@cloudera.com>
> wrote:
>
>> Hi all,
>>
>> Thanks to the combined work of many, many contributors, here's an RC0 for
>> 3.0.0

Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Eric Badger
All on OSX 10.11.6:
- Verified the hashes. However, Andrew, I don't know where to find your public 
key, so I wasn't able to verify that they were signed by you.
- Built from source
- Deployed a pseudo-distributed cluster
- Ran a few sample jobs
- Poked around the RM UI
- Poked around the attached website locally via the tarball
I did find one odd thing, though. It could be a misconfiguration on my system, 
but I've never had this problem before with other releases (though I deal 
almost exclusively in 2.x and so I imagine things might be different). When I 
run a sleep job, I do not see any diagnostics/logs/counters printed out by the 
client. Initially I ran the job like I would on 2.7 and it failed (because I 
had not set yarn.app.mapreduce.am.env and mapreduce.admin.user.env), but I 
didn't see anything until I looked at the RM UI. There I was able to see all of 
the logs for the failed job and diagnose the issue. Then, once I fixed my 
parameters and ran the job again, I still didn't see any 
diagnostics/logs/counters.
ebadger@foo: env | grep HADOOP
HADOOP_HOME=/Users/ebadger/Downloads/hadoop-3.0.0-alpha1-src/hadoop-dist/target/hadoop-3.0.0-alpha1/
HADOOP_CONF_DIR=/Users/ebadger/conf
ebadger@foo: $HADOOP_HOME/bin/hadoop jar 
$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha1-tests.jar
 sleep -Dyarn.app.mapreduce.am.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" 
-Dmapreduce.admin.user.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" -mt 1 -rt 1 -m 1 
-r 1
WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
ebadger@foo:
After running the above command, the RM UI showed a successful job, but as you 
can see, I did not have anything printed onto the command line. Hopefully this 
is just a misconfiguration on my part, but I figured that I would point it out 
just in case.
Thanks,
Eric


On Tuesday, August 30, 2016 12:58 PM, Andrew Wang 
 wrote:
 

 I'll put my own +1 on it:

* Built from source
* Started pseudo cluster and ran Pi job successfully

On Tue, Aug 30, 2016 at 10:17 AM, Zhe Zhang  wrote:

>
> Thanks Andrew for the great work! It's really exciting to finally see a
> Hadoop 3 RC.
>
> I noticed CHANGES and RELEASENOTES markdown files which were not in
> previous RCs like 2.7.3. What are good tools to verify them? I tried
> reading them on IntelliJ but format looks odd.
>
> I'm still testing the RC:
> - Downloaded and verified checksum
> - Built from source
> - Will start small cluster and test simple programs, focusing on EC
> functionalities
>
> -- Zhe
>
> On Tue, Aug 30, 2016 at 8:51 AM Andrew Wang 
> wrote:
>
>> Hi all,
>>
>> Thanks to the combined work of many, many contributors, here's an RC0 for
>> 3.0.0-alpha1:
>>
>> http://home.apache.org/~wang/3.0.0-alpha1-RC0/
>>
>> alpha1 is the first in a series of planned alpha releases leading up to
>> GA.
>> The objective is to get an artifact out to downstreams for testing and to
>> iterate quickly based on their feedback. So, please keep that in mind when
>> voting; hopefully most issues can be addressed by future alphas rather
>> than
>> future RCs.
>>
>> Sorry for getting this out on a Tuesday, but I'd still like this vote to
>> run the normal 5 days, thus ending Saturday (9/3) at 9AM PDT. I'll extend
>> if we lack the votes.
>>
>> Please try it out and let me know what you think.
>>
>> Best,
>> Andrew
>>
>


   

Re: [VOTE] Release Apache Hadoop 2.7.3 RC2

2016-08-23 Thread Eric Badger
+1 non-binding
All on OSX 10.11.6:
- Verified signatures of source and binary tarballs
- Compiled from source
- Deployed pseudo cluster
- Ran sample jobs
Thanks,
Eric 

On Tuesday, August 23, 2016 1:26 PM, Xuan Gong  
wrote:
 

 +1 Binding.

Built from source code.
Deployed single node cluster.
Successfully ran some example jobs.

Thanks

Xuan Gong

> On Aug 23, 2016, at 10:33 AM, Sunil Govind  wrote:
> 
> Hi All
> 
> +1 non-binding
> 
> 1. Built from source and deployed 2 node cluster
> 2. Tested node labels basic scenarios
> 3. Ran sleep and wordcount.
> 4. Verified few cli commands and web UI.
> 
> Thanks
> Sunil
> 
> On Thu, Aug 18, 2016 at 7:35 AM Vinod Kumar Vavilapalli 
> wrote:
> 
>> Hi all,
>> 
>> I've created a new release candidate RC2 for Apache Hadoop 2.7.3.
>> 
>> As discussed before, this is the next maintenance release to follow up
>> 2.7.2.
>> 
>> The RC is available for validation at:
>> http://home.apache.org/~vinodkv/hadoop-2.7.3-RC2/
>> 
>> The RC tag in git is: release-2.7.3-RC2
>> 
>> The maven artifacts are available via repository.apache.org at
>> https://repository.apache.org/content/repositories/orgapachehadoop-1046
>> 
>> The release-notes are inside the tar-balls at location
>> hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I
>> hosted this at
>> http://home.apache.org/~vinodkv/hadoop-2.7.3-RC2/releasenotes.html for
>> your quick perusal.
>> 
>> As you may have noted,
>> - few issues with RC0 forced a RC1 [1]
>> - few more issues with RC1 forced a RC2 [2]
>> - a very long fix-cycle for the License & Notice issues (HADOOP-12893)
>> caused 2.7.3 (along with every other Hadoop release) to slip by quite a
>> bit. This release's related discussion thread is linked below: [3].
>> 
>> Please try the release and vote; the vote will run for the usual 5 days.
>> 
>> Thanks,
>> Vinod
>> 
>> [1] [VOTE] Release Apache Hadoop 2.7.3 RC0:
>> https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/index.html#26106
>> 
>> [2] [VOTE] Release Apache Hadoop 2.7.3 RC1:
>> https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/msg26336.html
>> [3] 2.7.3 release plan:
>> https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/msg24439.html
>> (http://markmail.org/thread/6yv2fyrs4jlepmmr)


-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org


   

[jira] [Created] (HADOOP-13462) Increase timeout of TestAmFilter.testFilter

2016-08-03 Thread Eric Badger (JIRA)
Eric Badger created HADOOP-13462:


 Summary: Increase timeout of TestAmFilter.testFilter
 Key: HADOOP-13462
 URL: https://issues.apache.org/jira/browse/HADOOP-13462
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Eric Badger
Priority: Minor


The timeout is currently only 1 second. Saw a timeout failure:

{noformat}
java.lang.Exception: test timed out after 1000 milliseconds
at java.util.zip.ZipFile.getEntry(Native Method)
at java.util.zip.ZipFile.getEntry(ZipFile.java:311)
at java.util.jar.JarFile.getEntry(JarFile.java:240)
at java.util.jar.JarFile.getJarEntry(JarFile.java:223)
at sun.misc.URLClassPath$JarLoader.getResource(URLClassPath.java:841)
at sun.misc.URLClassPath.getResource(URLClassPath.java:199)
at java.net.URLClassLoader$1.run(URLClassLoader.java:364)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:455)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:367)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:455)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:367)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:455)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:367)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:455)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:367)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:455

Re: [VOTE] Release Apache Hadoop 2.7.3 RC0

2016-07-26 Thread Eric Badger
* Verified mds and pgp signatures of both source and binary
* Built tarball from source on OS X 10.11.6 (El Capitan)
* Deployed in pseudo-distributed mode
* Ran sleep jobs and other randomly selected tests on both MapReduce and Tez
* Visually verified the RM and history server UIs

Thanks,
Eric 

On Tuesday, July 26, 2016 3:23 PM, Karthik Kambatla  
wrote:
 

 IIRC, the vote is on source artifacts and binaries are for convenience.

If that is right, I am open to either options - do another RC or continue
this vote and fix the binary artifacts.

On Tue, Jul 26, 2016 at 12:11 PM, Vinod Kumar Vavilapalli <
vino...@apache.org> wrote:

> Thanks Daniel and Wei.
>
> I think these are worth fixing, I’m withdrawing this RC. Will look at
> fixing these issues and roll a new candidate with the fixes as soon as
> possible.
>
> Thanks
> +Vinod
>
> > On Jul 26, 2016, at 11:05 AM, Wei-Chiu Chuang 
> wrote:
> >
> > I noticed two issues:
> >
> > (1) I ran hadoop checknative, but it seems the binary tarball was not
> compiled with native library for Linux. On the contrary, the Hadoop built
> from source tarball with maven -Pnative can find the native libraries on
> the same host.
> >
> > (2) I noticed that the release dates in CHANGES.txt in tag
> release-2.7.3-RC0 are set to Release 2.7.3 - 2016-07-27.
> > However, the release dates in CHANGES.txt in the source and binary tar
> balls are set to Release 2.7.3 - 2016-08-01. This is probably a non-issue
> though.
> >
> > * Downloaded source and binary.
> > * Verified signature.
> > * Verified checksum.
> > * Built from source using 64-bit Java 7 (1.7.0.75) and 8 (1.8.0.05).
> Both went fine.
> > * Ran hadoop checknative
> >
> > On Tue, Jul 26, 2016 at 9:12 AM, Rushabh Shah
> >
> wrote:
> > Thanks Vinod for all the release work !
> > +1 (non-binding).
> > * Downloaded from source and built it.
> > * Deployed a pseudo distributed cluster.
> > * Ran some sample jobs: sleep, pi
> > * Ran some dfs commands.
> > * Everything works fine.
> >
> >
> >    On Friday, July 22, 2016 9:16 PM, Vinod Kumar Vavilapalli <
> vino...@apache.org > wrote:
> >
> >
> >  Hi all,
> >
> > I've created a release candidate RC0 for Apache Hadoop 2.7.3.
> >
> > As discussed before, this is the next maintenance release to follow up
> 2.7.2.
> >
> > The RC is available for validation at:
> > http://home.apache.org/~vinodkv/hadoop-2.7.3-RC0/
> >
> > The RC tag in git is: release-2.7.3-RC0
> >
> > The maven artifacts are available via repository.apache.org at
> > https://repository.apache.org/content/repositories/orgapachehadoop-1040/
> >
> > The release-notes are inside the tar-balls at location
> > hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I
> > hosted this at
> > http://home.apache.org/~vinodkv/hadoop-2.7.3-RC0/releasenotes.html
> > for your quick perusal.
> >
> > As you may have noted, a very long fix-cycle for the License & Notice
> issues (HADOOP-12893) caused 2.7.3 (along with every other Hadoop release)
> to slip by quite a bit. This release's related discussion thread is linked
> below: [1].
> >
> > Please try the release and vote; the vote will run for the usual 5 days.
> >
> > Thanks,
> > Vinod
> >
> > [1]: 2.7.3 release plan:
> > https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/msg24439.html
> > (http://markmail.org/thread/6yv2fyrs4jlepmmr)
> >
> >
> >
>
>

  

[jira] [Reopened] (HADOOP-10980) TestActiveStandbyElector fails occasionally in trunk

2016-05-10 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger reopened HADOOP-10980:
--
  Assignee: Eric Badger

This test failed for me locally when I was running a separate zookeeper 
instance. If we specify a port number, as [~ste...@apache.org] suggested, the 
test passes. For example, changing the port to 22, since that will likely only 
ever be used for SSH. 

> TestActiveStandbyElector fails occasionally in trunk
> 
>
> Key: HADOOP-10980
> URL: https://issues.apache.org/jira/browse/HADOOP-10980
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>    Assignee: Eric Badger
>Priority: Minor
>
> From https://builds.apache.org/job/Hadoop-Common-trunk/1211/consoleFull :
> {code}
> Running org.apache.hadoop.ha.TestActiveStandbyElector
> Tests run: 23, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.7 sec <<< 
> FAILURE! - in org.apache.hadoop.ha.TestActiveStandbyElector
> testWithoutZKServer(org.apache.hadoop.ha.TestActiveStandbyElector)  Time 
> elapsed: 0.051 sec  <<< FAILURE!
> java.lang.AssertionError: Did not throw zookeeper connection loss exceptions!
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.ha.TestActiveStandbyElector.testWithoutZKServer(TestActiveStandbyElector.java:722)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13053) FS Shell should use File system API, not FileContext

2016-04-22 Thread Eric Badger (JIRA)
Eric Badger created HADOOP-13053:


 Summary: FS Shell should use File system API, not FileContext
 Key: HADOOP-13053
 URL: https://issues.apache.org/jira/browse/HADOOP-13053
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Eric Badger


FS Shell is FileSystem-based, but it is using the FileContext API. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)