Re: [DISCUSS] Change project style guidelines to allow line length 100

2021-05-20 Thread John Zhuge
> https://lists.apache.org/thread.html/3e1785cbbe14dcab9bb970fa0f534811cfe00795a8cd1100580f27dc%401430849118%40%3Ccommon-dev.hadoop.apache.org%3E
>
> Thanks,
> Akira
>
> On Thu, May 20, 2021 at 6:36 AM Sean Busbey  wrote:
>>
>> Hello!
>>
>> What do folks think about changing our line length guidelines to allow
>> for 100 character width?
>>
>> Currently, we tell folks to follow the Sun style guide with some
>> exceptions unrelated to line length. That guide says a width of 80 is
>> the standard, and our current checkstyle rules act as enforcement.
>>
>> Looking at the current trunk codebase, our nightly build shows a total
>> of ~15k line length violations; that is about 18% of identified
>> checkstyle issues.
>>
>> The vast majority of those line length violations are <= 100 characters
>> long. 100 characters happens to be the limit in the Google Java Style
>> Guide, another commonly adopted style guide for Java projects, so I
>> suspect these longer lines leaking past the checkstyle precommit
>> warning might be a reflection of committers working across multiple
>> Java codebases.
>>
>> I don't feel strongly about lines being longer, but I would like to
>> move towards more consistent style enforcement as a project. Updating
>> our project guidance to allow for 100 character lines would reduce the
>> likelihood that folks bringing in new contributions need a precommit
>> test cycle to get the formatting correct.
>>
>> Does anyone feel strongly about keeping the line length limit at 80
>> characters?
>>
>> Does anyone feel strongly about contributions coming in that clear up
>> line length violations?
>>
>> -
>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
-- 
John Zhuge
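
As a concrete illustration of the proposal above: in Checkstyle the limit is a single property on the standard `LineLength` module. This is only a hedged sketch — the layout and module set of Hadoop's actual checkstyle.xml may differ:

```xml
<!-- Illustrative fragment only; Hadoop's real checkstyle.xml may differ.
     Since Checkstyle 8.24 the LineLength module sits directly under Checker. -->
<module name="Checker">
  <module name="LineLength">
    <!-- Sun style guide default is 80; the proposal raises it to 100,
         matching the Google Java Style Guide. -->
    <property name="max" value="100"/>
    <!-- Commonly exempted lines: package/import statements and long URLs. -->
    <property name="ignorePattern" value="^package.*|^import.*|http://|https://"/>
  </module>
</module>
```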


Re: [VOTE] End of Life Hadoop 2.9

2020-09-02 Thread John Zhuge
+1 (binding)

On Wed, Sep 2, 2020 at 7:51 AM Steve Loughran 
wrote:

> +1 (binding)
>
> On Mon, 31 Aug 2020 at 20:09, Wei-Chiu Chuang  wrote:
>
> > Dear fellow Hadoop developers,
> >
> > Given the overwhelming feedback from the discussion thread
> > https://s.apache.org/hadoop2.9eold, I'd like to start an official vote
> > thread for the community to vote and start the 2.9 EOL process.
> >
> > What this entails:
> >
> > (1) an official announcement that no further regular Hadoop 2.9.x
> > releases will be made after 2.9.2 (which was GA on 11/19/2018)
> > (2) resolve JIRAs that specifically target 2.9.3 as won't fix.
> >
> >
> > This vote will run for 7 days and will conclude by September 7th, 12:00pm
> > pacific time.
> > Committers are eligible to cast binding votes. Non-committers are
> > welcome to cast non-binding votes.
> >
> > Here is my vote, +1
> >
>


-- 
John Zhuge


Re: [DISCUSS] fate of branch-2.9

2020-08-27 Thread John Zhuge
+1

On Thu, Aug 27, 2020 at 6:01 AM Ayush Saxena  wrote:

> +1
>
> -Ayush
>
> > On 27-Aug-2020, at 6:24 PM, Steve Loughran 
> wrote:
> >
> > 
> >
> > +1
> >
> > are there any Hadoop branch-2 releases planned, ever? If so I'll need to
> backport my s3a directory compatibility patch to whatever is still live.
> >
> >
> >> On Thu, 27 Aug 2020 at 06:55, Wei-Chiu Chuang 
> wrote:
> >> Bump up this thread after 6 months.
> >>
> >> Is anyone still interested in the 2.9 release line? Or are we good to
> start
> >> the EOL process? The 2.9.2 was released in Nov 2018.
> >>
> >> I'd really like to see the community converge on fewer release lines
> >> and make more frequent releases in each line.
> >>
> >> Thanks,
> >> Weichiu
> >>
> >>
> >> On Fri, Mar 6, 2020 at 5:47 PM Wei-Chiu Chuang 
> wrote:
> >>
> >> > I think that's a great suggestion.
> >> > Currently, we make 1 minor release per year, and within each minor
> >> > release we bring in one to two thousand commits compared with the
> >> > previous one.
> >> > I can totally understand that is a big bite for users to swallow.
> >> > Having a more frequent release cycle, plus LTS and non-LTS releases,
> >> > should help with this. (Of course we will need to make the release
> >> > preparation much easier, which is currently a pain.)
> >> >
> >> > I am happy to discuss the release model further in the dev ML. LTS
> v.s.
> >> > non-LTS is one suggestion.
> >> >
> >> > Another similar issue: in the past Hadoop strove to maintain
> >> > compatibility. However, this is no longer sustainable as more CVEs
> >> > come from our dependencies: Netty, Jetty, Jackson, etc.
> >> > In many cases, updating the dependencies brings breaking changes. More
> >> > recently, especially in Hadoop 3.x, I started to make the effort to
> update
> >> > dependencies much more frequently. How do users feel about this
> change?
> >> >
> >> > On Thu, Mar 5, 2020 at 7:58 AM Igor Dvorzhak 
> >> > wrote:
> >> >
> >> >> Maybe Hadoop would benefit from adopting a similar release and
> >> >> support strategy as Java? I.e. designate some releases as LTS and
> >> >> support them for 2 (?) years (it seems that the 2.7.x branch was
> >> >> de-facto LTS); other non-LTS releases would be supported for 6
> >> >> months (or until the next release). This should reduce the
> >> >> maintenance cost of non-LTS releases and give conservative users
> >> >> the stability they want by letting them wait for a new LTS release
> >> >> and upgrade to it.
> >> >>
> >> >> On Thu, Mar 5, 2020 at 1:26 AM Rupert Mazzucco <
> rupert.mazzu...@gmail.com>
> >> >> wrote:
> >> >>
> >> >>> After recently jumping from 2.7.7 to 2.10 without issue myself, I
> vote
> >> >>> for keeping only the 2.10 line.
> >> >>> It would seem all other 2.x branches can upgrade to a 2.10.x easily
> if
> >> >>> they feel like upgrading at all,
> >> >>> unlike a jump to 3.x, which may require more planning.
> >> >>>
> >> >>> I also vote for having only one main 3.x branch. Why are there
> 3.1.x and
> >> >>> 3.2.x seemingly competing,
> >> >>> and now 3.3.x? For a community that does not have the resources to
> >> >>> manage multiple release lines,
> >> >>> you guys sure like to multiply release lines a lot.
> >> >>>
> >> >>> Cheers
> >> >>> Rupert
> >> >>>
> >> >>> Am Mi., 4. März 2020 um 19:40 Uhr schrieb Wei-Chiu Chuang
> >> >>> :
> >> >>>
> >> >>>> Forwarding the discussion thread from the dev mailing lists to the
> user
> >> >>>> mailing lists.
> >> >>>>
> >> >>>> I'd like to get an idea of how many users are still on Hadoop 2.9.
> >> >>>> Please share your thoughts.
> >> >>>>
> >> >>>> On Mon, Mar 2, 2020 at 6:30 PM Sree Vaddi
> >> >>>>  wrote:
> >> >>>>
> >> >>>>> +1
> >> >>>>>
> >> >>>>> Sent from Yahoo Mail on Android
> >> >>>>>
> >> >>>>> On Mon, Mar 2, 2020 at 5:12 PM, Wei-Chiu Chuang  wrote:
> >> >>>>>
> >> >>>>> Hi,
> >> >>>>>
> >> >>>>> Following the discussion to end branch-2.8, I want to start a
> >> >>>>> discussion
> >> >>>>> around what's next with branch-2.9. I am hesitant to use the word
> "end
> >> >>>>> of
> >> >>>>> life" but consider these facts:
> >> >>>>>
> >> >>>>> * 2.9.0 was released Dec 17, 2017.
> >> >>>>> * 2.9.2, the last 2.9.x release, went out Nov 19 2018, which is
> more
> >> >>>>> than
> >> >>>>> 15 months ago.
> >> >>>>> * no one seems to be interested in being the release manager for
> 2.9.3.
> >> >>>>> * Most if not all of the active Hadoop contributors are using
> Hadoop
> >> >>>>> 2.10
> >> >>>>> or Hadoop 3.x.
> >> >>>>> * We as a community do not have the cycle to manage multiple
> release
> >> >>>>> line,
> >> >>>>> especially since Hadoop 3.3.0 is coming out soon.
> >> >>>>>
> >> >>>>> It is perhaps the time to gradually reduce our footprint in Hadoop
> >> >>>>> 2.x, and
> >> >>>>> encourage people to upgrade to Hadoop 3.x
> >> >>>>>
> >> >>>>> Thoughts?
> >> >>>>>
> >> >>>>>
>


-- 
John Zhuge


Re: [VOTE] EOL Hadoop branch-2.8

2020-03-08 Thread John Zhuge
+1

On Tue, Mar 3, 2020 at 7:48 PM Bharat Viswanadham  wrote:

> +1
>
> Thanks,
> Bharat
>
> On Tue, Mar 3, 2020 at 7:46 PM Zhankun Tang  wrote:
>
> > Thanks, Wei-Chiu. +1.
> >
> > BR,
> > Zhankun
> >
> > On Wed, 4 Mar 2020 at 08:03, Wilfred Spiegelenburg
> >  wrote:
> >
> > > +1
> > >
> > > Wilfred
> > >
> > > > On 3 Mar 2020, at 05:48, Wei-Chiu Chuang  wrote:
> > > >
> > > > I am sorry I forgot to start a VOTE thread.
> > > >
> > > > This is the "official" vote thread to mark branch-2.8 End of Life.
> This
> > > is
> > > > based on the following thread and the tracking jira (HADOOP-16880
> > > > <https://issues.apache.org/jira/browse/HADOOP-16880>).
> > > >
> > > > This vote will run for 7 days and conclude on March 9th (Mon) 11am
> PST.
> > > >
> > > > Please feel free to share your thoughts.
> > > >
> > > > Thanks,
> > > > Weichiu
> > > >
> > > > On Mon, Feb 24, 2020 at 10:28 AM Wei-Chiu Chuang <
> weic...@cloudera.com
> > >
> > > > wrote:
> > > >
> > > >> Looking at the EOL policy wiki:
> > > >>
> > >
> >
> https://cwiki.apache.org/confluence/display/HADOOP/EOL+%28End-of-life%29+Release+Branches
> > > >>
> > > >> The Hadoop community can still elect to make security update for
> > EOL'ed
> > > >> releases.
> > > >>
> > > >> I think the EOL is to give more clarity to downstream applications
> > (such
> > > >> as HBase) the guidance of which Hadoop release lines are still
> active.
> > > >> Additionally, I don't think it is sustainable to maintain 6
> concurrent
> > > >> release lines in this big project, which is why I wanted to start
> this
> > > >> discussion.
> > > >>
> > > >> Thoughts?
> > > >>
> > > >> On Mon, Feb 24, 2020 at 10:22 AM Sunil Govindan 
> > > wrote:
> > > >>
> > > >>> Hi Wei-Chiu
> > > >>>
> > > >>> Extremely sorry for the late reply here.
> > > >>> Could you please add more clarity on what will happen to
> > > >>> branch-2.8 when we call EOL.
> > > >>> Does this mean no more releases will come out of this branch, or
> > > >>> are there some additional guidelines?
> > > >>>
> > > >>> - Sunil
> > > >>>
> > > >>>
> > > >>> On Mon, Feb 24, 2020 at 11:47 PM Wei-Chiu Chuang
> > > >>>  wrote:
> > > >>>
> > > >>>> This thread has been running for 7 days and no -1.
> > > >>>>
> > > >>>> Don't think we've established a formal EOL process, but to
> publicize
> > > the
> > > >>>> EOL, I am going to file a jira, update the wiki and post the
> > > >>> announcement
> > > >>>> to general@ and user@
> > > >>>>
> > > >>>> On Wed, Feb 19, 2020 at 1:40 PM Dinesh Chitlangia <
> > > >>> dineshc@gmail.com>
> > > >>>> wrote:
> > > >>>>
> > > >>>>> Thanks Wei-Chiu for initiating this.
> > > >>>>>
> > > >>>>> +1 for 2.8 EOL.
> > > >>>>>
> > > >>>>> On Tue, Feb 18, 2020 at 10:48 PM Akira Ajisaka <
> > aajis...@apache.org>
> > > >>>>> wrote:
> > > >>>>>
> > > >>>>>> Thanks Wei-Chiu for starting the discussion,
> > > >>>>>>
> > > >>>>>> +1 for the EoL.
> > > >>>>>>
> > > >>>>>> -Akira
> > > >>>>>>
> > > >>>>>> On Tue, Feb 18, 2020 at 4:59 PM Ayush Saxena <
> ayush...@gmail.com>
> > > >>>> wrote:
> > > >>>>>>
> > > >>>>>>> Thanx Wei-Chiu for initiating this
> > > >>>>>>> +1 for marking 2.8 EOL
> > > >>>>>>>
> > > >>>>>>> -Ayush
> > > >>>>>>>
> > > >>>>>>>> On 17-Feb-2020, at 11:14 PM, Wei-Chiu Chuang <
> > > >>> weic...@apache.org>
> > > >>>>>> wrote:
> > > >>>>>>>>
> > > >>>>>>>> The last Hadoop 2.8.x release, 2.8.5, was GA on September
> 15th
> > > >>>> 2018.
> > > >>>>>>>>
> > > >>>>>>>> It's been 17 months since the release and the community by and
> > > >>>> large
> > > >>>>>> have
> > > >>>>>>>> moved up to 2.9/2.10/3.x.
> > > >>>>>>>>
> > > >>>>>>>> With Hadoop 3.3.0 over the horizon, is it time to start the
> EOL
> > > >>>>>>> discussion
> > > >>>>>>>> and reduce the number of active branches?
> > > >>>>>>>
> > > >>>>>>>
> > > >>>
> > > >>>>>>>
> > > >>>>>>>
> > > >>>>>>
> > > >>>>>
> > > >>>>
> > > >>>
> > > >>
> > >
> > > Wilfred Spiegelenburg
> > > Staff Software Engineer
> > >  <https://www.cloudera.com/>
> > >
> >
>
-- 
John Zhuge


Re: [ANNOUNCE] New Apache Hadoop Committer - Stephen O'Donnell

2020-03-04 Thread John Zhuge
Congratulations Stephen!!

On Tue, Mar 3, 2020 at 7:45 PM Zhankun Tang  wrote:

> Congrats! Stephen!
>
> BR,
> Zhankun
>
> On Wed, 4 Mar 2020 at 04:27, Ayush Saxena  wrote:
>
> > Congratulations Stephen!!!
> >
> > -Ayush
> >
> > > On 04-Mar-2020, at 1:51 AM, Bharat Viswanadham 
> > wrote:
> > >
> > > Congratulations Stephen!
> > >
> > > Thanks,
> > > Bharat
> > >
> > >
> > >> On Tue, Mar 3, 2020 at 12:12 PM Wei-Chiu Chuang 
> > wrote:
> > >>
> > >> In bcc: general@
> > >>
> > >> It's my pleasure to announce that Stephen O'Donnell has been elected
> as
> > >> committer on the Apache Hadoop project recognizing his continued
> > >> contributions to the
> > >> project.
> > >>
> > >> Please join me in congratulating him.
> > >>
> > >> Hearty Congratulations & Welcome aboard Stephen!
> > >>
> > >> Wei-Chiu Chuang
> > >> (On behalf of the Hadoop PMC)
> > >>
> >
> > -
> > To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
> >
> >
>


-- 
John Zhuge


Re: [DISCUSS] Making submarine to different release model like Ozone

2019-02-01 Thread John Zhuge
+1

Does Submarine support Jupyter?

On Fri, Feb 1, 2019 at 8:54 AM Zhe Zhang  wrote:

> +1 on the proposal and looking forward to the progress of the project!
>
> On Thu, Jan 31, 2019 at 10:51 PM Weiwei Yang  wrote:
>
> > Thanks for proposing this Wangda, my +1 as well.
> > It is amazing to see the progress made in Submarine last year; the
> > community grows fast and is quite collaborative. I can see the reasons
> > to release it faster in its own cycle, and at the same time the Ozone
> > way works very well.
> >
> > —
> > Weiwei
> > On Feb 1, 2019, 10:49 AM +0800, Xun Liu , wrote:
> > > +1
> > >
> > > Hello everyone,
> > >
> > > I am Xun Liu, the head of the machine learning team at Netease Research
> > Institute. I quite agree with Wangda.
> > >
> > > Our team is very grateful for getting Submarine machine learning engine
> > from the community.
> > > We are heavy users of Submarine.
> > > Because Submarine fits into the direction of our big data team's
> > > Hadoop technology stack, it avoids the need to increase the manpower
> > > investment in learning other container scheduling systems.
> > > The important thing is that we can use a common YARN cluster to run
> > > machine learning, which makes the utilization of server resources
> > > more efficient, and has saved us a lot of human and material
> > > resources in previous years.
> > >
> > > Our team has finished testing and deploying Submarine and will
> > > provide the service to our e-commerce department
> > > (http://www.kaola.com/) shortly.
> > >
> > > We also plan to provide the Submarine engine in our existing YARN
> > > cluster in the next six months,
> > > because we have a lot of product departments that need to use machine
> > > learning services,
> > > for example:
> > > 1) Game department (http://game.163.com/) needs AI battle training,
> > > 2) News department (http://www.163.com) needs news recommendation,
> > > 3) Mailbox department (http://www.163.com) requires anti-spam and
> > illegal detection,
> > > 4) Music department (https://music.163.com/) requires music
> > recommendation,
> > > 5) Education department (http://www.youdao.com) requires voice
> > recognition,
> > > 6) Massive Open Online Courses (https://open.163.com/) requires
> > multilingual translation and so on.
> > >
> > > If Submarine can be released independently like Ozone, it will help
> > > us quickly get the latest features and improvements, and it will be
> > > greatly helpful to our team and users.
> > >
> > > Thanks hadoop Community!
> > >
> > >
> > > > On Feb 1, 2019, at 2:53 AM, Wangda Tan  wrote:
> > > >
> > > > Hi devs,
> > > >
> > > > Since we started the Submarine-related effort last year, we have
> > > > received a lot of feedback; several companies (such as Netease,
> > > > China Mobile, etc.) are trying to deploy Submarine to their Hadoop
> > > > clusters along with big data workloads.
> > TonY (
> > > > https://github.com/linkedin/TonY) runtime to allow users to use the
> > same
> > > > interface.
> > > >
> > > > From what I can see, there are several issues with keeping
> > > > Submarine under the yarn-applications directory with the same
> > > > release cycle as Hadoop:
> > > >
> > > > 1) We started the 3.2.0 release in Sep 2018, but the release was
> > > > done in Jan 2019. Because of unpredictable blockers and security
> > > > issues, it got delayed a lot. We need to iterate on Submarine fast
> > > > at this point.
> > > >
> > > > 2) We also see a lot of requirements to use Submarine on older Hadoop
> > > > releases such as 2.x. Many companies may not upgrade Hadoop to 3.x
> in a
> > > > short time, but the requirement to run deep learning is urgent to
> > them. We
> > > > should decouple Submarine from Hadoop version.
> > > >
> > > > And why do we want to keep it within Hadoop? First, Submarine
> > > > includes some innovative parts, such as user-experience
> > > > enhancements for YARN services/containerization support, which we
> > > > can add back to Hadoop later to address common requirements. In
> > > > addition to that, we have a big overlap in the community developing
> > > > and using it.
> > > >
> > > > There are several proposals we went through during the Ozone
> > > > merge-to-trunk discussion:
> > > >
> >
> https://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201803.mbox/%3ccahfhakh6_m3yldf5a2kq8+w-5fbvx5ahfgs-x1vajw8gmnz...@mail.gmail.com%3E
> > > >
> > > > I propose to adopt Ozone model: which is the same master branch,
> > different
> > > > release cycle, and different release branch. It is a great example to
> > show
> > > > agile release we can do (2 Ozone releases after Oct 2018) with less
> > > > overhead to setup CI, projects, etc.
> > > >
> > > > *Links:*
> > > > - JIRA: https://issues.apache.org/jira/browse/YARN-8135
> > > > - Design doc
> > > > <
> >
> https://docs.google.com/document/d/199J4pB3blqgV9SCNvBbTqkEoQdjoyGMjESV4MktCo0k/edit
> > >
> > > > - User doc
> > > > <
> >
> https:/

Re: HADOOP-14163 proposal for new hadoop.apache.org

2018-09-02 Thread John Zhuge
+1 Like the new site.

On Sun, Sep 2, 2018 at 7:02 PM Weiwei Yang  wrote:

> That's really nice, +1.
>
> --
> Weiwei
>
> On Sat, Sep 1, 2018 at 4:36 AM Wangda Tan  wrote:
>
> > +1, thanks for working on this, Marton!
> >
> > Best,
> > Wangda
> >
> > On Fri, Aug 31, 2018 at 11:24 AM Arpit Agarwal  >
> > wrote:
> >
> > > +1
> > >
> > > Thanks for initiating this Marton.
> > >
> > >
> > > On 8/31/18, 1:07 AM, "Elek, Marton"  wrote:
> > >
> > > Bumping this thread at last time.
> > >
> > > I have the following proposal:
> > >
> > > 1. I will request a new git repository hadoop-site.git and import
> the
> > > new site to there (which has exactly the same content as the
> existing
> > > site).
> > >
> > > 2. I will ask infra to use the new repository as the source of
> > > hadoop.apache.org
> > >
> > > 3. I will sync manually all of the changes in the next two months
> > back
> > > to the svn site from the git (release announcements, new
> committers)
> > >
> > > IN CASE OF ANY PROBLEM we can switch back to the svn without any
> > > problem.
> > >
> > > If no-one objects within three days, I'll assume lazy consensus and
> > > start with this plan. Please comment if you have objections.
> > >
> > > Again: it allows immediate fallback at any time as svn repo will be
> > > kept
> > > as is (+ I will keep it up-to-date in the next 2 months)
> > >
> > > Thanks,
> > > Marton
> > >
> > >
> > > On 06/21/2018 09:00 PM, Elek, Marton wrote:
> > > >
> > > > Thank you very much to bump up this thread.
> > > >
> > > >
> > > > About [2]: (Just for the clarification) the content of the
> proposed
> > > > website is exactly the same as the old one.
> > > >
> > > > About [1]. I believe that the "mvn site" is perfect for the
> > > > documentation but for website creation there are more simple and
> > > > powerful tools.
> > > >
> > > > Hugo is simpler compared to Jekyll: just one binary, without
> > > > dependencies, and it works everywhere (Mac, Linux, Windows).
> > > >
> > > > Hugo is much more powerful compared to "mvn site": it is easier to
> > > > create/use a more modern layout/theme, and easier to handle the
> > > > content (for example, new release announcements could be generated
> > > > as part of the release process).
> > > >
> > > > I think it's very low risk to try out a new approach for the site
> > > (and
> > > > easy to rollback in case of problems)
> > > >
> > > > Marton
> > > >
> > > > ps: I just updated the patch/preview site with the recent
> releases:
> > > >
> > > > ***
> > > > * http://hadoop.anzix.net *
> > > > ***
> > > >
> > > > On 06/21/2018 01:27 AM, Vinod Kumar Vavilapalli wrote:
> > > >> Got pinged about this offline.
> > > >>
> > > >> Thanks for keeping at it, Marton!
> > > >>
> > > >> I think there are two road-blocks here
> > > >>   (1) Is the mechanism using which the website is built good
> > enough
> > > -
> > > >> mvn-site / hugo etc?
> > > >>   (2) Is the new website good enough?
> > > >>
> > > >> For (1), I just think we need more committer attention and get
> > > >> feedback rapidly and get it in.
> > > >>
> > > >> For (2), how about we do it in a different way in the interest
> of
> > > >> progress?
> > > >>   - We create a hadoop.apache.org/new-site/ where this new site
> > > goes.
> > > >>   - We then modify the existing web-site to say that there is a
> > new
> > > >> site/experience that folks can click on a link and navigate to
> > > >>   - As this new website matures and gets feedback & fixes, we
> > > finally
> > > >> pull the plug at a later point of time when we think we are good
> > to
> > > go.
> > > >>
> > > >> Thoughts?
> > > >>
> > > >> +Vinod
> > > >>
> > > >>> On Feb 16, 2018, at 3:10 AM, Elek, Marton 
> > wrote:
> > > >>>
> > > >>> Hi,
> > > >>>
> > > >>> I would like to bump this thread up.
> > > >>>
> > > >>> TLDR; There is a proposed version of a new hadoop site which is
> > > >>> available from here:
> > https://elek.github.io/hadoop-site-proposal/
> > > and
> > > >>> https://issues.apache.org/jira/browse/HADOOP-14163
> > > >>>
> > > >>> Please let me know what you think about it.
> > > >>>
> > > >>>
> > > >>> Longer version:
> > > >>>
> > > >>> This thread started long time ago to use a more modern hadoop
> > site:
> > > >>>
> > > >>> Goals were:
> > > >>>
> > > >>> 1. To make it easier to manage it (the release entries could be
> > > >>> created by a script as part of the release process)
> > > >>> 2. To use a better look-and-feel
> > > >>> 3. Move it out from svn to git
> > > >>>
> > > >>> I proposed to:
> > > >>>
> > > >>> 1. Move 

Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)

2018-06-08 Thread John Zhuge
Thanks Yongjun for the excellent work to drive this release!


+1 (binding)

   - Verified checksums and signatures of tarballs
   - Built source with native, Oracle Java 1.8.0_152 on Mac OS X 10.13.5
   - Verified cloud connectors:
      - ADLS integration tests passed with 1 failure, not a blocker
   - Deployed both binary and built source to a pseudo cluster, passed the
     following sanity tests in insecure and SSL mode:
      - HDFS basic and ACL
      - WebHDFS CLI ls and REST LISTSTATUS
      - DistCp basic
      - MapReduce wordcount
      - KMS and HttpFS basic and servlets
      - Balancer start/stop

ADLS unit test failure:


[ERROR] Tests run: 43, Failures: 1, Errors: 0, Skipped: 0, Time elapsed:
68.889 s <<< FAILURE! - in
org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive

[ERROR]
testMkdirsWithUmask(org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive)
Time elapsed: 0.851 s  <<< FAILURE!

java.lang.AssertionError: expected:<461> but was:<456>


See https://issues.apache.org/jira/browse/HADOOP-14435. I don't think it is
a blocker.
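
A side note on reading that assertion: JUnit prints the compared values in decimal, which obscures that they are permission bit patterns. Rendering them in octal (a hedged interpretation, assuming they come from the filesystem permission short value) makes the difference readable:

```shell
# 461 and 456 are decimal renderings of octal permission bits.
printf 'expected: %o  actual: %o\n' 461 456
# prints: expected: 715  actual: 710
```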

Thanks,

On Fri, Jun 8, 2018 at 12:04 PM, Xiao Chen  wrote:

> Thanks for the effort on this Yongjun.
>
> +1 (binding)
>
>    - Built from src
>    - Deployed a pseudo distributed HDFS with KMS
>    - Ran basic hdfs commands with encryption
>    - Sanity checked webui and logs
>
>
> -Xiao
>
> On Fri, Jun 8, 2018 at 10:34 AM, Brahma Reddy Battula <
> brahmareddy.batt...@hotmail.com> wrote:
>
> > Thanks yongjun zhang for driving this release.
> >
> > +1 (binding).
> >
> >
> > ---Built from the source
> > ---Installed HA cluster
> > ---Execute the basic shell commands
> > ---Browsed the UI's
> > ---Ran sample jobs like pi,wordcount
> >
> >
> >
> > 
> > From: Yongjun Zhang 
> > Sent: Friday, June 8, 2018 1:04 PM
> > To: Allen Wittenauer
> > Cc: Hadoop Common; Hdfs-dev; mapreduce-...@hadoop.apache.org;
> > yarn-dev@hadoop.apache.org
> > Subject: Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)
> >
> > BTW, thanks Allen and Steve for discussing and suggestion about the site
> > build problem I hit earlier, I did the following step
> >
> > mvn install -DskipTests
> >
> > before doing the steps Nanda listed helped to solve the problems.
> >
> > --Yongjun
> >
> >
> >
> >
> > On Thu, Jun 7, 2018 at 6:15 PM, Yongjun Zhang 
> wrote:
> >
> > > Thank you all very much for the testing, feedback and discussion!
> > >
> > > I was able to build outside docker, by following the steps Nanda
> > > described, I saw the same problem; then I tried 3.0.2 released a while
> > > back, it has the same issue.
> > >
> > > As Allen pointed out, it seems the steps to build site are not
> correct. I
> > > have not figured out the correct steps yet.
> > >
> > > At this point, I think this issue should not block the 3.0.3 issue.
> While
> > > at the same time we need to figure out the right steps to build the
> site.
> > > Would you please let me know if you think differently?
> > >
> > > We only have the site build issue reported so far. And we don't have
> > > enough PMC votes yet. So need some more PMCs to help.
> > >
> > > Thanks again, and best regards,
> > >
> > > --Yongjun
> > >
> > >
> > > On Thu, Jun 7, 2018 at 4:15 PM, Allen Wittenauer <
> > a...@effectivemachines.com
> > > > wrote:
> > >
> > >> > On Jun 7, 2018, at 11:47 AM, Steve Loughran  >
> > >> wrote:
> > >> >
> > >> > Actually, Yongjun has been really good at helping me get set up for
> a
> > >> 2.7.7 release, including "things you need to do to get GPG working in
> > the
> > >> docker image”
> > >>
> > >> *shrugs* I use a different release script after some changes
> > >> broke the in-tree version for building on OS X and I couldn’t get the
> > fixes
> > >> committed upstream.  So not sure what the problems are that you are
> > hitting.
> > >>
> > >> > On Jun 7, 2018, at 1:08 PM, Nandakumar Vadivelu <
> > >> nvadiv...@hortonworks.com> wrote:
> > >> >
> > >> > It will be helpful if we can get the correct steps, and also update
> > the
> > >> wiki.
> > >> > https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+
> > >> Release+Validation
> > >>
> > >> Yup. Looking forward to seeing it.
> > >>
> > >>
> > >
> >
>



-- 
John


Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-13 Thread John Zhuge
Thanks Andrew for the great effort! Here is my late vote.


+1 (binding)

   - Verified checksums and signatures of tarballs
   - Built source with native, Oracle Java 1.8.0_152 on Mac OS X 10.13.2
   - Verified cloud connectors:
      - S3A integration tests (perf tests skipped)
   - Deployed both binary and built source to a pseudo cluster, passed the
     following sanity tests in insecure and SSL mode:
      - HDFS basic and ACL
      - DistCp basic
      - MapReduce wordcount
      - KMS and HttpFS basic
      - Balancer start/stop


On Wed, Dec 13, 2017 at 6:12 PM, Vinod Kumar Vavilapalli  wrote:

> Yes, JIRAs will be filed, the wiki-page idea from YARN meetup is to record
> all combinations of testing that need to be done and correspondingly
> capture all the testing that someone in the community has already done and
> record it for future perusal.
>
> From what you are saying, I guess we haven't advertised to the public yet
> on rolling upgrades, but in our meetups etc so far, you have been saying
> that rolling upgrades is supported - so I assumed we did put it in our
> messaging.
>
> The important question is if we are or are not allowed to make potentially
> incompatible changes to fix bugs in the process of supporting 2.x to 3.x
> upgrades whether rolling or not.
>
> +Vinod
>
> > On Dec 13, 2017, at 1:05 PM, Andrew Wang 
> wrote:
> >
> > I'm hoping we can address YARN-7588 and any remaining rolling upgrade
> issues in 3.0.x maintenance releases. Beyond a wiki page, it would be
> really great to get JIRAs filed and targeted for tracking as soon as
> possible.
> >
> > Vinod, what do you think we need to do regarding caveating rolling
> upgrade support? We haven't advertised rolling upgrade support between
> major releases outside of dev lists and JIRA. As a new major release, our
> compat guidelines allow us to break compatibility, so I don't think it's
> expected by users.
> >
>
>


-- 
John


Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

2017-12-12 Thread John Zhuge
Oops, the vote was meant for 2.7.5. Sorry for the confusion.

My 2.8.3 vote is coming up shortly.

On Tue, Dec 12, 2017 at 4:28 PM, John Zhuge  wrote:

> Thanks Junping for the great effort!
>
>
>    - Verified checksums and signatures of all tarballs
>    - Built source with native, Azul Java 1.7.0_161 on Mac OS X 10.13.2
>    - Verified cloud connectors:
>       - All S3A integration tests
>    - Deployed both binary and built source to a pseudo cluster, passed
>      the following sanity tests in insecure and SSL mode:
>       - HDFS basic and ACL
>       - DistCp basic
>       - MapReduce wordcount
>       - KMS and HttpFS basic
>       - Balancer start/stop
>
>
> Non-blockers
>
>- HADOOP-13030 Handle special characters in passwords in KMS startup
>script. Fixed in 2.8+.
>- NameNode servlets test failures: 403 User dr.who is unauthorized to
>access this page. Researching. Could be just a test configuration issue.
>
> John
>
> On Tue, Dec 12, 2017 at 1:10 PM, Eric Badger 
> wrote:
>
>> Thanks, Junping
>>
>> +1 (non-binding) looks good from my end
>>
>> - Verified all hashes and checksums
>> - Built from source on macOS 10.12.6, Java 1.8.0u65
>> - Deployed a pseudo cluster
>> - Ran some example jobs
>>
>> Eric
>>
>> On Tue, Dec 12, 2017 at 12:55 PM, Konstantin Shvachko <
>> shv.had...@gmail.com>
>> wrote:
>>
>> > Downloaded again, now the checksums look good. Sorry, my fault.
>> >
>> > Thanks,
>> > --Konstantin
>> >
>> > On Mon, Dec 11, 2017 at 5:03 PM, Junping Du 
>> wrote:
>> >
>> > > Hi Konstantin,
>> > >
>> > >  Thanks for verification and comments. I was verifying your
>> example
>> > > below but found it actually matches:
>> > >
>> > >
>> > > *jduMBP:hadoop-2.8.3 jdu$ md5 ~/Downloads/hadoop-2.8.3-src.tar.gz*
>> > > *MD5 (/Users/jdu/Downloads/hadoop-2.8.3-src.tar.gz) =
>> > > e53d04477b85e8b58ac0a26468f04736*
>> > >
>> > > What's your md5 checksum for the given source tarball?
>> > >
>> > >
>> > > Thanks,
>> > >
>> > >
>> > > Junping
>> > >
>> > >
>> > > --
>> > > *From:* Konstantin Shvachko 
>> > > *Sent:* Saturday, December 9, 2017 11:06 AM
>> > > *To:* Junping Du
>> > > *Cc:* common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org;
>> > > mapreduce-...@hadoop.apache.org; yarn-dev@hadoop.apache.org
>> > > *Subject:* Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)
>> > >
>> > > Hey Junping,
>> > >
>> > > Could you pls upload mds relative to the tar.gz etc. files rather than
>> > > their full path
>> > > /build/source/target/artifacts/hadoop-2.8.3-src.tar.gz:
>> > >MD5 = E5 3D 04 47 7B 85 E8 B5  8A C0 A2 64 68 F0 47 36
>> > >
>> > > Otherwise mds don't match for me.
>> > >
>> > > Thanks,
>> > > --Konstantin
>> > >
>> > > On Tue, Dec 5, 2017 at 1:58 AM, Junping Du 
>> wrote:
>> > >
>> > >> Hi all,
>> > >>  I've created the first release candidate (RC0) for Apache Hadoop
>> > >> 2.8.3. This is our next maint release to follow up 2.8.2. It
>> includes 79
>> > >> important fixes and improvements.
>> > >>
>> > >>   The RC artifacts are available at:
>> > >> http://home.apache.org/~junping_du/hadoop-2.8.3-RC0
>> > >>
>> > >>   The RC tag in git is: release-2.8.3-RC0
>> > >>
>> > >>   The maven artifacts are available via repository.apache.org at:
>> > >> https://repository.apache.org/content/repositories/orgapachehadoop-1072
>> > >>
>> > >>   Please try the release and vote; the vote will run for the
>> usual 5
>> > >> working days, ending on 12/12/2017 PST time.
>> > >>
>> > >> Thanks,
>> > >>
>> > >> Junping
>> > >>
>> > >
>> > >
>> >
>>
>
>
>
> --
> John
>



-- 
John Zhuge
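
The spaced-hex digests compared in the thread above come from the release's `.mds` files. As a rough sketch (illustrative only; the voters above used platform tools such as `md5` and `gpg --verify`, not Java), the same formatting can be reproduced like this:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class MdsStyleDigest {

    // Render a digest in the spaced-hex style seen in the .mds files,
    // e.g. "E5 3D 04 ...  ... 47 36" (an extra space splits the two halves).
    static String mdsFormat(byte[] digest) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < digest.length; i++) {
            if (i > 0) {
                sb.append(i == digest.length / 2 ? "  " : " ");
            }
            sb.append(String.format("%02X", digest[i]));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // A real verification would stream the downloaded tarball from disk;
        // a short literal keeps the sketch self-contained.
        byte[] data = "hello".getBytes(StandardCharsets.UTF_8);
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        System.out.println("MD5 = " + mdsFormat(md5.digest(data)));
        // prints: MD5 = 5D 41 40 2A BC 4B 2A 76  B9 71 9D 91 10 17 C5 92
    }
}
```

Run against the actual tarball bytes, the output can be compared line by line with the corresponding `.mds` entry, which sidesteps the full-path-vs-relative-path mismatch discussed above.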


Re: [VOTE] Release Apache Hadoop 2.7.5 (RC1)

2017-12-12 Thread John Zhuge
Thanks Konstantin for the great effort!


+1 (binding)

   - Verified checksums and signatures of all tarballs
   - Built source with native, Azul Java 1.7.0_161 on Mac OS X 10.13.2
   - Verified cloud connectors:
  - All S3A integration tests
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure and SSL mode:
  - HDFS basic and ACL
  - DistCp basic
  - MapReduce wordcount
  - KMS and HttpFS basic
  - Balancer start/stop


Non-blockers

   - HADOOP-13030 Handle special characters in passwords in KMS startup
   script. Fixed in 2.8+.
   - NameNode servlets test failures: 403 User dr.who is unauthorized to
   access this page. Researching. Could be just a test configuration issue.


On Tue, Dec 12, 2017 at 2:14 PM, Eric Badger 
wrote:

> Thanks, Konstantin. Everything looks good to me
>
> +1 (non-binding)
>
> - Verified all signatures and digests
> - Built from source on macOS 10.12.6, Java 1.8.0u65
> - Deployed a pseudo cluster
> - Ran some example jobs
>
> Eric
>
> On Tue, Dec 12, 2017 at 11:01 AM, Jason Lowe 
> wrote:
>
> > Thanks for driving the release, Konstantin!
> >
> > +1 (binding)
> >
> > - Verified signatures and digests
> > - Successfully performed a native build from source
> > - Deployed a single-node cluster
> > - Ran some sample jobs and checked the logs
> >
> > Jason
> >
> >
> > On Thu, Dec 7, 2017 at 9:22 PM, Konstantin Shvachko <
> shv.had...@gmail.com>
> > wrote:
> >
> > > Hi everybody,
> > >
> > > I updated CHANGES.txt and fixed documentation links.
> > > Also committed  MAPREDUCE-6165, which fixes a consistently failing
> test.
> > >
> > > This is RC1 for the next dot release of Apache Hadoop 2.7 line. The
> > > previous one, 2.7.4, was released August 4, 2017.
> > > Release 2.7.5 includes critical bug fixes and optimizations. See more
> > > details in Release Note:
> > > http://home.apache.org/~shv/hadoop-2.7.5-RC1/releasenotes.html
> > >
> > > The RC1 is available at: http://home.apache.org/~shv/hadoop-2.7.5-RC1/
> > >
> > > Please give it a try and vote on this thread. The vote will run for 5
> > days
> > > ending 12/13/2017.
> > >
> > > My up to date public key is available from:
> > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > >
> > > Thanks,
> > > --Konstantin
> > >
> >
>



-- 
John


Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

2017-12-12 Thread John Zhuge
Thanks Junping for the great effort!


   - Verified checksums and signatures of all tarballs
   - Built source with native, Azul Java 1.7.0_161 on Mac OS X 10.13.2
   - Verified cloud connectors:
  - All S3A integration tests
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure and SSL mode:
  - HDFS basic and ACL
  - DistCp basic
  - MapReduce wordcount
  - KMS and HttpFS basic
  - Balancer start/stop


Non-blockers

   - HADOOP-13030 Handle special characters in passwords in KMS startup
   script. Fixed in 2.8+.
   - NameNode servlets test failures: 403 User dr.who is unauthorized to
   access this page. Researching. Could be just a test configuration issue.

John

On Tue, Dec 12, 2017 at 1:10 PM, Eric Badger 
wrote:

> Thanks, Junping
>
> +1 (non-binding) looks good from my end
>
> - Verified all hashes and checksums
> - Built from source on macOS 10.12.6, Java 1.8.0u65
> - Deployed a pseudo cluster
> - Ran some example jobs
>
> Eric
>
> On Tue, Dec 12, 2017 at 12:55 PM, Konstantin Shvachko <
> shv.had...@gmail.com>
> wrote:
>
> > Downloaded again, now the checksums look good. Sorry, my fault.
> >
> > Thanks,
> > --Konstantin
> >
> > On Mon, Dec 11, 2017 at 5:03 PM, Junping Du  wrote:
> >
> > > Hi Konstantin,
> > >
> > >  Thanks for verification and comments. I was verifying your example
> > > below but found it actually matches:
> > >
> > >
> > > *jduMBP:hadoop-2.8.3 jdu$ md5 ~/Downloads/hadoop-2.8.3-src.tar.gz*
> > > *MD5 (/Users/jdu/Downloads/hadoop-2.8.3-src.tar.gz) =
> > > e53d04477b85e8b58ac0a26468f04736*
> > >
> > > What's your md5 checksum for the given source tarball?
> > >
> > >
> > > Thanks,
> > >
> > >
> > > Junping
> > >
> > >
> > > --
> > > *From:* Konstantin Shvachko 
> > > *Sent:* Saturday, December 9, 2017 11:06 AM
> > > *To:* Junping Du
> > > *Cc:* common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org;
> > > mapreduce-...@hadoop.apache.org; yarn-dev@hadoop.apache.org
> > > *Subject:* Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)
> > >
> > > Hey Junping,
> > >
> > > Could you pls upload mds relative to the tar.gz etc. files rather than
> > > their full path
> > > /build/source/target/artifacts/hadoop-2.8.3-src.tar.gz:
> > >MD5 = E5 3D 04 47 7B 85 E8 B5  8A C0 A2 64 68 F0 47 36
> > >
> > > Otherwise mds don't match for me.
> > >
> > > Thanks,
> > > --Konstantin
> > >
> > > On Tue, Dec 5, 2017 at 1:58 AM, Junping Du 
> wrote:
> > >
> > >> Hi all,
> > >>  I've created the first release candidate (RC0) for Apache Hadoop
> > >> 2.8.3. This is our next maint release to follow up 2.8.2. It includes
> 79
> > >> important fixes and improvements.
> > >>
> > >>   The RC artifacts are available at:
> > >> http://home.apache.org/~junping_du/hadoop-2.8.3-RC0
> > >>
> > >>   The RC tag in git is: release-2.8.3-RC0
> > >>
> > >>   The maven artifacts are available via repository.apache.org at:
> > >> https://repository.apache.org/content/repositories/orgapachehadoop-1072
> > >>
> > >>   Please try the release and vote; the vote will run for the
> usual 5
> > >> working days, ending on 12/12/2017 PST time.
> > >>
> > >> Thanks,
> > >>
> > >> Junping
> > >>
> > >
> > >
> >
>



-- 
John


Re: [VOTE] Release Apache Hadoop 3.0.0 RC0

2017-11-16 Thread John Zhuge
+1 (binding)

   - Verified checksums of all tarballs
   - Built source with native, Java 1.8.0_131-b11 on Mac OS X 10.12.6
   - Passed all S3A and ADL integration tests
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure, SSL, and SSL+Kerberos mode:
  - HDFS basic and ACL
  - DistCp basic
  - MapReduce wordcount (skipped in SSL+Kerberos mode)
  - KMS and HttpFS basic
  - Balancer start/stop


On Tue, Nov 14, 2017 at 1:34 PM, Andrew Wang 
wrote:

> Hi folks,
>
> Thanks as always to the many, many contributors who helped with this
> release. I've created RC0 for Apache Hadoop 3.0.0. The artifacts are
> available here:
>
> http://people.apache.org/~wang/3.0.0-RC0/
>
> This vote will run 5 days, ending on Nov 19th at 1:30pm Pacific.
>
> 3.0.0 GA contains 291 fixed JIRA issues since 3.0.0-beta1. Notable
> additions include the merge of YARN resource types, API-based configuration
> of the CapacityScheduler, and HDFS router-based federation.
>
> I've done my traditional testing with a pseudo cluster and a Pi job. My +1
> to start.
>
> Best,
> Andrew
>



-- 
John


Re: [VOTE] Release Apache Hadoop 2.9.0 (RC1)

2017-11-09 Thread John Zhuge
+1 (binding)

   - Verified checksums and signatures of all tarballs
   - Built source with native, Java 1.8.0_131-b11 on Mac OS X 10.12.6
   - Verified these cloud connectors:
  - All S3A integration tests
  - All ADL live unit tests
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure, SSL, and SSL+Kerberos mode:
  - HDFS basic and ACL
  - DistCp basic
  - MapReduce wordcount (skipped in SSL+Kerberos mode)
  - KMS and HttpFS basic
  - Balancer start/stop


On Thu, Nov 9, 2017 at 5:39 PM, Subru Krishnan  wrote:

> Hi Folks,
>
> Apache Hadoop 2.9.0 is the first release of the Hadoop 2.9 line and will be the
> starting release for the Apache Hadoop 2.9.x line - it includes 30 New Features
> with 500+ subtasks, 407 Improvements, and 790 Bug fixes since 2.8.2.
>
> More information about the 2.9.0 release plan can be found here:
> https://cwiki.apache.org/confluence/display/HADOOP/Roadmap#Roadmap-Version2.9
>
> New RC is available at: http://home.apache.org/~asuresh/hadoop-2.9.0-RC1/
>
> The RC tag in git is: release-2.9.0-RC1, and the latest commit id is:
> 7d2ba3e8dd74d2631c51ce6790d59e50eeb7a846.
>
> The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1066
>
> Please try the release and vote; the vote will run for the usual 5 days,
> ending on Tuesday 14th November 2017 6pm PST time.
>
> Thanks,
> -Subru/Arun
>



-- 
John


Re: [VOTE] Release Apache Hadoop 2.8.2 (RC1)

2017-10-23 Thread John Zhuge
 +1 (binding)

   - Verified checksums and signatures of all tarballs
   - Built source with native, Java 1.8.0_131-b11 on Mac OS X 10.12.6
   - Verified cloud connectors:
  - All S3A integration tests
  - All ADL live unit tests
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure, SSL, and SSL+Kerberos mode:
  - HDFS basic and ACL
  - DistCp basic
  - MapReduce wordcount (skipped in SSL+Kerberos mode)
  - KMS and HttpFS basic
  - Balancer start/stop


On Mon, Oct 23, 2017 at 9:03 AM, Ajay Kumar 
wrote:

> Thanks, Junping Du!
>
> +1 (non-binding)
>
> - Built from source
> - Ran hdfs commands
> - Ran pi and sample MR test.
> - Verified the UI's
>
> Thanks,
> Ajay Kumar
>
> On 10/23/17, 8:14 AM, "Shane Kumpf"  wrote:
>
> Thanks, Junping!
>
> +1 (non-binding)
>
> - Verified checksums and signatures
> - Deployed a single node cluster on CentOS 7.2 using the binary tgz,
> source
> tgz, and git tag
> - Ran hdfs commands
> - Ran pi and distributed shell using the default and docker runtimes
> - Verified Docker docs
> - Verified docker runtime can be disabled
> - Verified the UI's
>
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>
>


-- 
John


Re: [VOTE] Release Apache Hadoop 3.0.0-beta1 RC0

2017-10-03 Thread John Zhuge
+1 (binding)

   - Verified checksums and signatures of all tarballs
   - Built source with native, Java 1.8.0_131-b11 on Mac OS X 10.12.6
   - Verified cloud connectors:
  - All S3A integration tests
  - All ADL live unit tests
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure, SSL, and SSL+Kerberos mode:
  - HDFS basic and ACL
  - DistCp basic
  - MapReduce wordcount (only failed in SSL+Kerberos mode for binary
  tarball, probably unrelated)
  - KMS and HttpFS basic
  - Balancer start/stop

Hit the following errors but they don't seem to be blocking:

== Missing dependencies during build ==

> ERROR: hadoop-aliyun has missing dependencies: json-lib-jdk15.jar
> ERROR: hadoop-azure has missing dependencies: jetty-util-ajax-9.3.19.
> v20170502.jar
> ERROR: hadoop-azure-datalake has missing dependencies: okhttp-2.4.0.jar
> ERROR: hadoop-azure-datalake has missing dependencies: okio-1.4.0.jar


Filed HADOOP-14923, HADOOP-14924, and HADOOP-14925.

== Unit tests failed in Kerberos+SSL mode for KMS and HttpFS default HTTP
servlet /conf, /stacks, and /logLevel ==

One example below:

>Connecting to
> https://localhost:14000/logLevel?log=org.apache.hadoop.fs.http.server.HttpFSServer
>Exception in thread "main"
> org.apache.hadoop.security.authentication.client.AuthenticationException:
> Authentication failed, URL:
> https://localhost:14000/logLevel?log=org.apache.hadoop.fs.http.server.HttpFSServer&user.name=jzhuge,
> status: 403, message: GSSException: Failure unspecified at GSS-API level
> (Mechanism level: Request is a replay (34))


The /logLevel failure will affect command "hadoop daemonlog".


On Tue, Oct 3, 2017 at 10:56 AM, Andrew Wang 
wrote:

> Thanks for all the votes thus far! We've gotten the binding +1's to close
> the release, though are there contributors who could kick the tires on
> S3Guard and YARN TSv2 alpha2? These are the two new features merged since
> alpha4, so it'd be good to get some coverage.
>
>
>
> On Tue, Oct 3, 2017 at 9:45 AM, Brahma Reddy Battula 
> wrote:
>
> >
> > Thanks Andrew.
> >
> > +1 (non binding)
> >
> > --Built from source
> > --installed 3 node HA cluster
> > --Verified shell commands and UI
> > --Ran wordcount/pi jobs
> >
> >
> >
> >
> > On Fri, 29 Sep 2017 at 5:34 AM, Andrew Wang 
> > wrote:
> >
> >> Hi all,
> >>
> >> Let me start, as always, by thanking the many, many contributors who
> >> helped
> >> with this release! I've prepared an RC0 for 3.0.0-beta1:
> >>
> >> http://home.apache.org/~wang/3.0.0-beta1-RC0/
> >>
> >> This vote will run five days, ending on Nov 3rd at 5PM Pacific.
> >>
> >> beta1 contains 576 fixed JIRA issues comprising a number of bug fixes,
> >> improvements, and feature enhancements. Notable additions include the
> >> addition of YARN Timeline Service v2 alpha2, S3Guard, completion of the
> >> shaded client, and HDFS erasure coding pluggable policy support.
> >>
> >> I've done the traditional testing of running a Pi job on a pseudo
> cluster.
> >> My +1 to start.
> >>
> >> We're working internally on getting this run through our integration
> test
> >> rig. I'm hoping Vijay or Ray can ring in with a +1 once that's complete.
> >>
> >> Best,
> >> Andrew
> >>
> > --
> >
> >
> >
> > --Brahma Reddy Battula
> >
>



-- 
John


[jira] [Created] (YARN-7002) branch-2 build is broken by AllocationFileLoaderService.java

2017-08-11 Thread John Zhuge (JIRA)
John Zhuge created YARN-7002:


 Summary: branch-2 build is broken by 
AllocationFileLoaderService.java
 Key: YARN-7002
 URL: https://issues.apache.org/jira/browse/YARN-7002
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.9.0
Reporter: John Zhuge


branch-2 build is broken:
{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hadoop-yarn-server-resourcemanager: Compilation failure
[ERROR] 
/Users/jzhuge/hadoop-commit/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java:[270,39]
 incompatible types: java.util.HashSet cannot be converted to 
java.util.Set

{noformat}
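
The archive has stripped the generic type arguments from the javac message above, but the failure is the familiar Java 7 vs Java 8 type-inference gap that bites branch-2 (still built with javac 7) when Java-8-targeted code is backported. A minimal, hypothetical reproduction — not the actual AllocationFileLoaderService code — looks like this:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class Jdk7InferenceDemo {

    // Under javac 8, target typing infers <String> for both the diamond and
    // Collections.emptyList(), so the commented-out version compiles; javac 7
    // infers Object instead and reports "incompatible types:
    // java.util.HashSet<Object> cannot be converted to java.util.Set<String>"
    // (the archive above stripped the <...> arguments from the real message).
    //
    // static Set<String> withDiamond() {
    //     return new HashSet<>(Collections.emptyList());  // breaks a Java 7 build
    // }

    // Spelling out the type arguments compiles on both javac 7 and javac 8,
    // the usual fix for this class of backport breakage.
    static Set<String> withExplicitArgs() {
        return new HashSet<String>(Collections.<String>emptyList());
    }

    public static void main(String[] args) {
        Set<String> tags = withExplicitArgs();
        tags.add("queueA");
        System.out.println(tags); // prints [queueA]
    }
}
```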




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)

2017-07-31 Thread John Zhuge
Just confirmed that HADOOP-13707 does fix the NN servlet issue in
branch-2.7.

On Mon, Jul 31, 2017 at 11:38 AM, Konstantin Shvachko 
wrote:

> Hi John,
>
> Thank you for checking and voting.
> As far as I know test failures on 2.7.4 are intermittent. We have a jira
> for that
> https://issues.apache.org/jira/browse/HDFS-11985
> but decided it should not block the release.
> The "dr,who" thing is a configuration issue. This page may be helpful:
> http://hadoop.apache.org/docs/stable/hadoop-hdfs-httpfs/ServerSetup.html
>
> Thanks,
> --Konstantin
>
> On Sun, Jul 30, 2017 at 11:24 PM, John Zhuge  wrote:
>
>> Hi Konstantin,
>>
>> Thanks a lot for the effort to prepare the 2.7.4-RC0 release!
>>
>> +1 (non-binding)
>>
>>- Verified checksums and signatures of all tarballs
>>- Built source with native, Java 1.8.0_131-b11 on Mac OS X 10.12.6
>>- Verified cloud connectors:
>>   - All S3A integration tests
>>- Deployed both binary and built source to a pseudo cluster, passed
>>the following sanity tests in insecure, SSL, and SSL+Kerberos mode:
>>   - HDFS basic and ACL
>>   - DistCp basic
>>   - MapReduce wordcount (only failed in SSL+Kerberos mode for binary
>>   tarball, probably unrelated)
>>   - KMS and HttpFS basic
>>   - Balancer start/stop
>>
>> Shall we worry about these test failures? Likely fixed by
>> https://issues.apache.org/jira/browse/HADOOP-13707.
>>
>>- Got “curl: (22) The requested URL returned error: 403 User dr.who
>>is unauthorized to access this page.” when accessing NameNode web servlet
>>/jmx, /conf, /logLevel, and /stacks. It passed in branch-2.8.
>>
>>
>> On Sat, Jul 29, 2017 at 4:29 PM, Konstantin Shvachko <
>> shv.had...@gmail.com> wrote:
>>
>>> Hi everybody,
>>>
>>> Here is the next release of Apache Hadoop 2.7 line. The previous stable
>>> release 2.7.3 has been available since 25 August 2016.
>>> Release 2.7.4 includes 264 issues fixed after release 2.7.3, which are
>>> critical bug fixes and major optimizations. See more details in Release
>>> Note:
>>> http://home.apache.org/~shv/hadoop-2.7.4-RC0/releasenotes.html
>>>
>>> The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.4-RC0/
>>>
>>> Please give it a try and vote on this thread. The vote will run for 5
>>> days
>>> ending 08/04/2017.
>>>
>>> Please note that my up-to-date public key is available from:
>>> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>>> Please don't forget to refresh the page if you've been there recently.
>>> There are other places on Apache sites, which may contain my outdated key.
>>>
>>> Thanks,
>>> --Konstantin
>>>
>>
>>
>>
>> --
>> John
>>
>
>


-- 
John


Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)

2017-07-30 Thread John Zhuge
Hi Konstantin,

Thanks a lot for the effort to prepare the 2.7.4-RC0 release!

+1 (non-binding)

   - Verified checksums and signatures of all tarballs
   - Built source with native, Java 1.8.0_131-b11 on Mac OS X 10.12.6
   - Verified cloud connectors:
  - All S3A integration tests
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure, SSL, and SSL+Kerberos mode:
  - HDFS basic and ACL
  - DistCp basic
  - MapReduce wordcount (only failed in SSL+Kerberos mode for binary
  tarball, probably unrelated)
  - KMS and HttpFS basic
  - Balancer start/stop

Shall we worry about these test failures? Likely fixed by
https://issues.apache.org/jira/browse/HADOOP-13707.

   - Got “curl: (22) The requested URL returned error: 403 User dr.who is
   unauthorized to access this page.” when accessing NameNode web servlet
   /jmx, /conf, /logLevel, and /stacks. It passed in branch-2.8.


On Sat, Jul 29, 2017 at 4:29 PM, Konstantin Shvachko 
wrote:

> Hi everybody,
>
> Here is the next release of Apache Hadoop 2.7 line. The previous stable
> release 2.7.3 has been available since 25 August 2016.
> Release 2.7.4 includes 264 issues fixed after release 2.7.3, which are
> critical bug fixes and major optimizations. See more details in Release
> Note:
> http://home.apache.org/~shv/hadoop-2.7.4-RC0/releasenotes.html
>
> The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.4-RC0/
>
> Please give it a try and vote on this thread. The vote will run for 5 days
> ending 08/04/2017.
>
> Please note that my up-to-date public key is available from:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> Please don't forget to refresh the page if you've been there recently.
> There are other places on Apache sites, which may contain my outdated key.
>
> Thanks,
> --Konstantin
>



-- 
John


Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0

2017-07-05 Thread John Zhuge
+1 (non-binding)


   - Verified checksums and signatures of the tarballs
   - Built source with native, Java 1.8.0_131 on Mac OS X 10.12.5
   - Cloud connectors:
  - A few S3A integration tests
  - A few ADL live unit tests
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure, SSL, and SSL+Kerberos mode:
  - HDFS basic and ACL
  - DistCp basic
  - WordCount (skipped in Kerberos mode)
  - KMS and HttpFS basic

Thanks Andrew for the great effort!

On Wed, Jul 5, 2017 at 1:33 PM, Eric Payne 
wrote:

> Thanks Andrew.
> I downloaded the source, built it, and installed it onto a pseudo
> distributed 4-node cluster.
>
> I ran mapred and streaming test cases, including sleep and wordcount.
> +1 (non-binding)
> -Eric
>
>   From: Andrew Wang 
>  To: "common-...@hadoop.apache.org" ; "
> hdfs-...@hadoop.apache.org" ; "
> mapreduce-...@hadoop.apache.org" ; "
> yarn-dev@hadoop.apache.org" 
>  Sent: Thursday, June 29, 2017 9:41 PM
>  Subject: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
>
> Hi all,
>
> As always, thanks to the many, many contributors who helped with this
> release! I've prepared an RC0 for 3.0.0-alpha4:
>
> http://home.apache.org/~wang/3.0.0-alpha4-RC0/
>
> The standard 5-day vote would run until midnight on Tuesday, July 4th.
> Given that July 4th is a holiday in the US, I expect this vote might have
> to be extended, but I'd like to close the vote relatively soon after.
>
> I've done my traditional testing of a pseudo-distributed cluster with a
> single task pi job, which was successful.
>
> Normally my testing would end there, but I'm slightly more confident this
> time. At Cloudera, we've successfully packaged and deployed a snapshot from
> a few days ago, and run basic smoke tests. Some bugs found from this
> include HDFS-11956, which fixes backwards compat with Hadoop 2 clients, and
> the revert of HDFS-11696, which broke NN QJM HA setup.
>
> Vijay is working on a test run with a fuller test suite (the results of
> which we can hopefully post soon).
>
> My +1 to start,
>
> Best,
> Andrew
>
>
>
>



-- 
John


Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0

2017-07-05 Thread John Zhuge
False alarm, fixed the build issue with "mvn -U clean install".

On Wed, Jul 5, 2017 at 6:08 PM, John Zhuge  wrote:

> For some reason, I can't build the source.
>
> Got this when running "mvn install -U" inside directory
> "hadoop-maven-plugins":
>
> [ERROR] Failed to execute goal org.apache.maven.plugins:
> maven-remote-resources-plugin:1.5:process (default) on project
> hadoop-maven-plugins: Execution default of goal org.apache.maven.plugins:
> maven-remote-resources-plugin:1.5:process failed: Plugin
> org.apache.maven.plugins:maven-remote-resources-plugin:1.5 or one of its
> dependencies could not be resolved: Failed to collect dependencies at
> org.apache.maven.plugins:maven-remote-resources-plugin:jar:1.5 ->
> org.apache.hadoop:hadoop-build-tools:jar:3.0.0-alpha4: Failed to read
> artifact descriptor for org.apache.hadoop:hadoop-build-tools:jar:3.0.0-alpha4:
> Failure to find org.apache.hadoop:hadoop-main:pom:3.0.0-alpha4 in
> https://repo.maven.apache.org/maven2 was cached in the local repository,
> resolution will not be reattempted until the update interval of central has
> elapsed or updates are forced -> [Help 1]
>
>
> On Thu, Jun 29, 2017 at 7:40 PM, Andrew Wang 
> wrote:
>
>> Hi all,
>>
>> As always, thanks to the many, many contributors who helped with this
>> release! I've prepared an RC0 for 3.0.0-alpha4:
>>
>> http://home.apache.org/~wang/3.0.0-alpha4-RC0/
>>
>> The standard 5-day vote would run until midnight on Tuesday, July 4th.
>> Given that July 4th is a holiday in the US, I expect this vote might have
>> to be extended, but I'd like to close the vote relatively soon after.
>>
>> I've done my traditional testing of a pseudo-distributed cluster with a
>> single task pi job, which was successful.
>>
>> Normally my testing would end there, but I'm slightly more confident this
>> time. At Cloudera, we've successfully packaged and deployed a snapshot
>> from
>> a few days ago, and run basic smoke tests. Some bugs found from this
>> include HDFS-11956, which fixes backwards compat with Hadoop 2 clients,
>> and
>> the revert of HDFS-11696, which broke NN QJM HA setup.
>>
>> Vijay is working on a test run with a fuller test suite (the results of
>> which we can hopefully post soon).
>>
>> My +1 to start,
>>
>> Best,
>> Andrew
>>
>
>
>
> --
> John
>



-- 
John


Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0

2017-07-05 Thread John Zhuge
For some reason, I can't build the source.

Got this when running "mvn install -U" inside directory
"hadoop-maven-plugins":

[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process
(default) on project hadoop-maven-plugins: Execution default of goal
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process failed:
Plugin org.apache.maven.plugins:maven-remote-resources-plugin:1.5 or one of
its dependencies could not be resolved: Failed to collect dependencies at
org.apache.maven.plugins:maven-remote-resources-plugin:jar:1.5 ->
org.apache.hadoop:hadoop-build-tools:jar:3.0.0-alpha4: Failed to read
artifact descriptor for
org.apache.hadoop:hadoop-build-tools:jar:3.0.0-alpha4: Failure to find
org.apache.hadoop:hadoop-main:pom:3.0.0-alpha4 in
https://repo.maven.apache.org/maven2 was cached in the local repository,
resolution will not be reattempted until the update interval of central has
elapsed or updates are forced -> [Help 1]


On Thu, Jun 29, 2017 at 7:40 PM, Andrew Wang 
wrote:

> Hi all,
>
> As always, thanks to the many, many contributors who helped with this
> release! I've prepared an RC0 for 3.0.0-alpha4:
>
> http://home.apache.org/~wang/3.0.0-alpha4-RC0/
>
> The standard 5-day vote would run until midnight on Tuesday, July 4th.
> Given that July 4th is a holiday in the US, I expect this vote might have
> to be extended, but I'd like to close the vote relatively soon after.
>
> I've done my traditional testing of a pseudo-distributed cluster with a
> single task pi job, which was successful.
>
> Normally my testing would end there, but I'm slightly more confident this
> time. At Cloudera, we've successfully packaged and deployed a snapshot from
> a few days ago, and run basic smoke tests. Some bugs found from this
> include HDFS-11956, which fixes backwards compat with Hadoop 2 clients, and
> the revert of HDFS-11696, which broke NN QJM HA setup.
>
> Vijay is working on a test run with a fuller test suite (the results of
> which we can hopefully post soon).
>
> My +1 to start,
>
> Best,
> Andrew
>



-- 
John


[jira] [Created] (YARN-6501) FSSchedulerNode.java failed to build with JDK7

2017-04-19 Thread John Zhuge (JIRA)
John Zhuge created YARN-6501:


 Summary: FSSchedulerNode.java failed to build with JDK7
 Key: YARN-6501
 URL: https://issues.apache.org/jira/browse/YARN-6501
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.9.0
Reporter: John Zhuge


{noformat}
[ERROR] 
/Users/jzhuge/hadoop-commit/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSSchedulerNode.java:[183,18]
 cannot find symbol
[ERROR] symbol:   method 
putIfAbsent(org.apache.hadoop.yarn.api.records.ApplicationAttemptId,org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt)
[ERROR] location: variable appIdToAppMap of type 
java.util.Map
[ERROR] 
/Users/jzhuge/hadoop-commit/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSSchedulerNode.java:[184,29]
 cannot find symbol
[ERROR] symbol:   method 
putIfAbsent(org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt,org.apache.hadoop.yarn.api.records.Resource)
[ERROR] location: variable resourcesPreemptedForApp of type 
java.util.Map
{noformat}

{{Map#putIfAbsent}} was introduced in JDK8.
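
Since {{Map#putIfAbsent}} only exists as a default method from JDK8 onward, a branch-2 (Java 7) build cannot resolve it. A sketch of the usual Java 7-compatible replacement (illustrative only; the map and key names below are made up and this is not the actual YARN-6501 patch):

{noformat}
import java.util.HashMap;
import java.util.Map;

public class PutIfAbsentCompat {

    // Java 7 stand-in for Map#putIfAbsent. Returns the previous value, or
    // null if the key was absent and the new value was stored -- matching
    // the JDK8 contract for maps that do not hold null values.
    static <K, V> V putIfAbsent(Map<K, V> map, K key, V value) {
        V existing = map.get(key);
        if (existing == null) {
            map.put(key, value);
        }
        return existing;
    }

    public static void main(String[] args) {
        Map<String, Integer> preempted = new HashMap<String, Integer>();
        System.out.println(putIfAbsent(preempted, "appattempt_1", 512));  // null
        System.out.println(putIfAbsent(preempted, "appattempt_1", 1024)); // 512
        System.out.println(preempted.get("appattempt_1"));                // 512
    }
}
{noformat}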



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-21 Thread John Zhuge
+1. Thanks for the great effort, Junping!


   - Verified checksums and signatures of the tarballs
   - Built source code with Java 1.8.0_66-b17 on Mac OS X 10.12.3
   - Built source and native code with Java 1.8.0_111 on CentOS 7.2.1511
   - Cloud connectors:
  - s3a: integration tests, basic fs commands
  - adl: live unit tests, basic fs commands. See notes below.
   - Deployed a pseudo cluster, passed the following sanity tests in
   both insecure and SSL mode:
  - HDFS: basic dfs, distcp, ACL commands
  - KMS and HttpFS: basic tests
  - MapReduce wordcount
  - balancer start/stop


Needs the following JIRAs to pass all ADL tests:

   - HADOOP-14205. No FileSystem for scheme: adl. Contributed by John Zhuge.
   - HDFS-11132. Allow AccessControlException in contract tests when
   getFileStatus on subdirectory of existing files. Contributed by Vishwajeet
   Dusane
   - HADOOP-13928. TestAdlFileContextMainOperationsLive.testGetFileContext1
   runtime error. (John Zhuge via lei)


On Mon, Mar 20, 2017 at 10:31 AM, John Zhuge  wrote:

> Yes, it only affects ADL. There is a workaround of adding these 2
> properties to core-site.xml:
>
>   <property>
>     <name>fs.adl.impl</name>
>     <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
>   </property>
>
>   <property>
>     <name>fs.AbstractFileSystem.adl.impl</name>
>     <value>org.apache.hadoop.fs.adl.Adl</value>
>   </property>
>
> I have the initial patch ready but am hitting these live unit test failures:
>
> Failed tests:
>
> TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testListStatus:257
> expected:<1> but was:<10>
>
> Tests in error:
>
> TestAdlFileContextMainOperationsLive>FileContextMainOperationsBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:254
> » AccessControl
>
> TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:190
> » AccessControl
>
>
> Stay tuned...
>
> John Zhuge
> Software Engineer, Cloudera
>
> On Mon, Mar 20, 2017 at 10:02 AM, Junping Du  wrote:
>
> > Thank you for reporting the issue, John! Does this issue only affect ADL
> > (Azure Data Lake), which is a new feature in 2.8, rather than other
> > existing FS? If so, I think we can leave the fix to 2.8.1, given this is
> > not a regression and just a new feature that got broken.
> >
> >
> > Thanks,
> >
> >
> > Junping
> > --
> > *From:* John Zhuge 
> > *Sent:* Monday, March 20, 2017 9:07 AM
> > *To:* Junping Du
> > *Cc:* common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org;
> > yarn-dev@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> > *Subject:* Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)
> >
> > Discovered https://issues.apache.org/jira/browse/HADOOP-14205 "No
> > FileSystem for scheme: adl".
> >
> > The issue was caused by backporting HADOOP-13037 to branch-2 and
> > earlier.
> > HADOOP-12666 should not be backported, but some changes are needed:
> > property fs.adl.impl in core-default.xml and hadoop-tools-dist/pom.xml.
> >
> > I am working on a patch.
> >
> >
> > John Zhuge
> > Software Engineer, Cloudera
> >
> > On Fri, Mar 17, 2017 at 2:18 AM, Junping Du  wrote:
> >
> >> Hi all,
> >>  With the fix for HDFS-11431 in, I've created a new release candidate
> >> (RC3) for Apache Hadoop 2.8.0.
> >>
> >>  This is the next minor release to follow up on 2.7.0, which was
> >> released more than a year ago. It comprises 2,900+ fixes, improvements,
> >> and new features. Most of these commits are released for the first time
> >> in branch-2.
> >>
> >>   More information about the 2.8.0 release plan can be found here:
> >> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release
> >>
> >>   New RC is available at: http://home.apache.org/~junping_du/hadoop-2.8.0-RC3
> >>
> >>   The RC tag in git is: release-2.8.0-RC3, and the latest commit id
> >> is: 91f2b7a13d1e97be65db92ddabc627cc29ac0009
> >>
> >>   The maven artifacts are available via repository.apache.org at:
> >> https://repository.apache.org/content/repositories/orgapachehadoop-1057
> >>
> >>   Please try the release and vote; the vote will run for the usual 5
> >> days, ending on 03/22/2017 PDT time.
> >>
> >> Thanks,
> >>
> >> Junping
> >>
> >
> >
>



-- 
John


[jira] [Resolved] (YARN-5054) Remove redundent TestMiniDFSCluster.testDualClusters

2016-05-06 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved YARN-5054.
--
Resolution: Invalid

Moved to the HDFS project.

> Remove redundent TestMiniDFSCluster.testDualClusters
> 
>
> Key: YARN-5054
> URL: https://issues.apache.org/jira/browse/YARN-5054
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.6.0
>    Reporter: John Zhuge
>Priority: Trivial
>  Labels: newbie
>
> Unit test {{TestMiniDFSCluster.testDualClusters}} is redundant because 
> {{testClusterWithoutSystemProperties}} already proves 
> {{cluster.getDataDirectory() == getProp(HDFS_MINIDFS_BASEDIR) + "/data"}}. 
> This unit test sets HDFS_MINIDFS_BASEDIR to two different values and brings up 
> two clusters, so of course they will have different data directories. 
> Remove it to save the time of bringing up two mini clusters.






[jira] [Created] (YARN-5054) Remove redundent TestMiniDFSCluster.testDualClusters

2016-05-06 Thread John Zhuge (JIRA)
John Zhuge created YARN-5054:


 Summary: Remove redundent TestMiniDFSCluster.testDualClusters
 Key: YARN-5054
 URL: https://issues.apache.org/jira/browse/YARN-5054
 Project: Hadoop YARN
  Issue Type: Test
  Components: test
Affects Versions: 2.6.0
Reporter: John Zhuge
Priority: Trivial


Unit test {{TestMiniDFSCluster.testDualClusters}} is redundant because 
{{testClusterWithoutSystemProperties}} already proves 
{{cluster.getDataDirectory() == getProp(HDFS_MINIDFS_BASEDIR) + "/data"}}. This 
unit test sets HDFS_MINIDFS_BASEDIR to two different values and brings up two 
clusters, so of course they will have different data directories.

Remove it to save the time of bringing up two mini clusters.
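The redundancy argument above can be made concrete: if the data directory is always derived as the base directory plus {{/data}}, two clusters with different base dirs trivially get different data dirs. A hypothetical stand-in, not the real {{MiniDFSCluster}} code:

```java
public class DataDirDemo {
    // The invariant the first test already proves:
    // dataDirectory == <HDFS_MINIDFS_BASEDIR> + "/data"
    static String dataDirectory(String baseDir) {
        return baseDir + "/data";
    }

    public static void main(String[] args) {
        String d1 = dataDirectory("/tmp/cluster1");
        String d2 = dataDirectory("/tmp/cluster2");
        // Different base dirs imply different data dirs by construction,
        // so a second cluster adds no new coverage.
        System.out.println(!d1.equals(d2)); // true
    }
}
```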






[jira] [Resolved] (YARN-4959) MiniYARNCluster should implement AutoCloseable

2016-04-25 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved YARN-4959.
--
   Resolution: Not A Problem
Fix Version/s: 2.7.0

Already supported because:
* {{MiniYARNCluster}} inherits from {{CompositeService}} => {{AbstractService}} 
=> {{Service}} => {{Closeable}} => {{AutoCloseable}}.
* {{AbstractService.close}} calls {{stop}}, which calls {{serviceStop}}.
* {{MiniYARNCluster}} performs its cleanup in its overridden {{serviceStop}}.

Thanks, [~boky01], for pointing it out in a comment on HDFS-10287.
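The chain above is why try-with-resources works without any new code. A minimal, self-contained sketch of the same pattern; {{ServiceLike}} is a hypothetical stand-in for Hadoop's {{Service}} interface, not the real class:

```java
// Stand-in for Hadoop's Service: close() delegates to stop(),
// as AbstractService.close() does, so the type is AutoCloseable for free.
interface ServiceLike extends AutoCloseable {
    void stop();

    @Override
    default void close() {
        stop();
    }
}

public class TryWithResourcesDemo {
    static boolean stopped = false;

    public static void main(String[] args) {
        // The resource is stopped automatically when the block exits,
        // even if the body throws.
        try (ServiceLike cluster = () -> stopped = true) {
            // ... use the cluster ...
        }
        System.out.println("stopped = " + stopped); // stopped = true
    }
}
```

This is the cleanup guarantee the issue was after: tests no longer need an explicit {{finally}} block to stop the cluster.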

> MiniYARNCluster should implement AutoCloseable
> --
>
> Key: YARN-4959
> URL: https://issues.apache.org/jira/browse/YARN-4959
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: John Zhuge
>Priority: Trivial
> Fix For: 2.7.0
>
>
> {{MiniYARNCluster}} should implement {{AutoCloseable}} in order to support 
> [try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html].
>  It will make test code a little cleaner and more reliable.
> Since {{AutoCloseable}} is only in Java 1.7 or later, this cannot be 
> backported to Hadoop versions prior to 2.7.


