[jira] [Resolved] (HADOOP-18132) S3 exponential backoff

2022-02-20 Thread John Zhuge (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HADOOP-18132.
-
Resolution: Not A Problem

S3A already performs retries on S3 errors. For details, please check out 
https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html#Retry_and_Recovery.

> S3 exponential backoff
> --
>
> Key: HADOOP-18132
> URL: https://issues.apache.org/jira/browse/HADOOP-18132
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Holden Karau
>Priority: Major
>
> S3 API has limits which we can exceed when using a large number of 
> writers/readers/or listers. We should add randomized-exponential back-off to 
> the s3 client when it encounters:
>  
> com.amazonaws.services.s3.model.AmazonS3Exception: Please reduce your request 
> rate. (Service: Amazon S3; Status Code: 503; Error Code: SlowDown; 
>  
>  
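The improvement requested in the quoted JIRA text, randomized exponential back-off on 503 SlowDown responses, can be sketched as follows. This is a minimal illustration with invented class and constant names, not S3A's actual retry code (S3A's retry behavior is configurable and documented at the link above):

```java
import java.util.Random;

// Sketch of randomized ("full jitter") exponential backoff for throttled
// S3 calls. All names here are illustrative; S3A's real retry handling
// lives in its retry policy classes and is driven by configuration,
// not hard-coded constants like these.
public class S3Backoff {
    static final long BASE_DELAY_MS = 100;    // delay ceiling for attempt 0
    static final long MAX_DELAY_MS = 20_000;  // cap so delays stop growing
    private static final Random RANDOM = new Random();

    // The ceiling doubles on each attempt (capped), and the actual sleep is
    // drawn uniformly from [0, ceiling) so many throttled clients spread out
    // instead of retrying in lockstep.
    static long delayMillis(int attempt) {
        long ceiling = Math.min(MAX_DELAY_MS, BASE_DELAY_MS << Math.min(attempt, 20));
        return (long) (RANDOM.nextDouble() * ceiling);
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 6; attempt++) {
            long ceiling = Math.min(MAX_DELAY_MS, BASE_DELAY_MS << attempt);
            System.out.println("attempt " + attempt + ": sleep up to "
                + ceiling + " ms, chose " + delayMillis(attempt) + " ms");
        }
    }
}
```

The jitter (drawing uniformly below the ceiling rather than sleeping exactly the ceiling) is what keeps a fleet of throttled writers from hammering S3 again at the same instant.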



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] Change project style guidelines to allow line length 100

2021-05-20 Thread John Zhuge
> https://lists.apache.org/thread.html/3e1785cbbe14dcab9bb970fa0f534811cfe00795a8cd1100580f27dc%401430849118%40%3Ccommon-dev.hadoop.apache.org%3E
> > > > > > >> > > Thanks,
> > > > > > >> > > Akira
> > > > > > >> > >
> > > > > > >> > > On Thu, May 20, 2021 at 6:36 AM Sean Busbey
> > > > > > >> mailto:sbus...@apple.com.invalid
> > >>
> > > > > > >> > wrote:
> > > > > > >> > >>
> > > > > > >> > >> Hello!
> > > > > > >> > >>
> > > > > > >> > >> What do folks think about changing our line
> length
> > > > > > >> guidelines to allow
> > > > > > >> > for 100 character width?
> > > > > > >> > >>
> > > > > > >> > >> Currently, we tell folks to follow the sun style
> > > guide
> > > > > > with
> > > > > > >> some
> > > > > > >> > exception unrelated to line length. That guide says width
> > of
> > > 80
> > > > > is
> > > > > > the
> > > > > > >> > standard and our current check style rules act as
> > > enforcement.
> > > > > > >> > >>
> > > > > > >> > >> Looking at the current trunk codebase our
> nightly
> > > > build
> > > > > > >> shows a total
> > > > > > >> > of ~15k line length violations; it’s about 18% of
> > identified
> > > > > > >> checkstyle
> > > > > > >> > issues.
> > > > > > >> > >>
> > > > > > >> > >> The vast majority of those line length
> violations
> > > are
> > > > <=
> > > > > > 100
> > > > > > >> characters
> > > > > > >> > long. 100 characters happens to be the length for the
> > Google
> > > > Java
> > > > > > >> Style
> > > > > > >> > Guide, another commonly adopted style guide for java
> > > projects,
> > > > > so I
> > > > > > >> suspect
> > > > > > >> > these longer lines leaking past the checkstyle precommit
> > > > warning
> > > > > > >> might be a
> > > > > > >> > reflection of committers working across multiple java
> > > > codebases.
> > > > > > >> > >>
> > > > > > >> > >> I don’t feel strongly about lines being longer,
> > but
> > > I
> > > > > > would
> > > > > > >> like to
> > > > > > >> > move towards more consistent style enforcement as a
> > project.
> > > > > > Updating
> > > > > > >> our
> > > > > > >> > project guidance to allow for 100 character lines would
> > > reduce
> > > > > the
> > > > > > >> > likelihood that folks bringing in new contributions need
> a
> > > > > > precommit
> > > > > > >> test
> > > > > > >> > cycle to get the formatting correct.
> > > > > > >> > >>
> > > > > > >> > >> Does anyone feel strongly about keeping the line
> > > > length
> > > > > > >> limit at 80
> > > > > > >> > characters?
> > > > > > >> > >>
> > > > > > >> > >> Does anyone feel strongly about contributions
> > coming
> > > > in
> > > > > > that
> > > > > > >> clear up
> > > > > > >> > line length violations?
> > > > > > >> > >>
> > > > > > >> > >>
> > > > > > >> > >>
> > > > > > >>
> > > > > > >> > >>
> > > > > > >> > >
> > > > > > >> > >
> > > > > > >>
> > > > > > >> > >
> > > > > > >> >
> > > > > > >> >
> > > > > >
> > > > > > >> >
> > > > > > >> >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
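The 80-versus-100 comparison described in the quoted proposal boils down to a simple count over source lines. A hypothetical sketch of that arithmetic (this is not Hadoop's actual checkstyle tooling, just the same measurement applied to a list of lines):

```java
import java.util.Arrays;
import java.util.List;

// Count how many lines break the current 80-column rule, and how many of
// those would already pass at 100 columns, mirroring the nightly-build
// numbers quoted in the thread. Purely illustrative.
public class LineLengthAudit {
    static long violations(List<String> lines, int limit) {
        return lines.stream().filter(l -> l.length() > limit).count();
    }

    public static void main(String[] args) {
        String ninety = "x".repeat(90);     // breaks 80, fits in 100
        String oneTwenty = "x".repeat(120); // breaks both limits
        List<String> file = Arrays.asList("short", ninety, oneTwenty);
        System.out.println("over 80:  " + violations(file, 80));  // 2
        System.out.println("over 100: " + violations(file, 100)); // 1
    }
}
```

The thread's claim is exactly this shape: most of the ~15k lines counted at `limit = 80` disappear at `limit = 100`.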
-- 
John Zhuge


Re: [VOTE] End of Life Hadoop 2.9

2020-09-02 Thread John Zhuge
+1 (binding)

On Wed, Sep 2, 2020 at 7:51 AM Steve Loughran 
wrote:

> +1
>  binding
>
> On Mon, 31 Aug 2020 at 20:09, Wei-Chiu Chuang  wrote:
>
> > Dear fellow Hadoop developers,
> >
> > Given the overwhelming feedback from the discussion thread
> > https://s.apache.org/hadoop2.9eold, I'd like to start an official vote
> > thread for the community to vote and start the 2.9 EOL process.
> >
> > What this entails:
> >
> > (1) an official announcement that no further regular Hadoop 2.9.x
> releases
> > will be made after 2.9.2 (which was GA on 11/19/2019)
> > (2) resolve JIRAs that specifically target 2.9.3 as won't fix.
> >
> >
> > This vote will run for 7 days and will conclude by September 7th, 12:00pm
> > pacific time.
> > Committers are eligible to cast binding votes. Non-committers are
> welcomed
> > to cast non-binding votes.
> >
> > Here is my vote, +1
> >
>


-- 
John Zhuge


Re: [DISCUSS] fate of branch-2.9

2020-08-27 Thread John Zhuge
+1

On Thu, Aug 27, 2020 at 6:01 AM Ayush Saxena  wrote:

> +1
>
> -Ayush
>
> > On 27-Aug-2020, at 6:24 PM, Steve Loughran 
> wrote:
> >
> > 
> >
> > +1
> >
> > are there any Hadoop branch-2 releases planned, ever? If so I'll need to
> backport my s3a directory compatibility patch to whatever is still live.
> >
> >
> >> On Thu, 27 Aug 2020 at 06:55, Wei-Chiu Chuang 
> wrote:
> >> Bump up this thread after 6 months.
> >>
> >> Is anyone still interested in the 2.9 release line? Or are we good to
> start
> >> the EOL process? The 2.9.2 was released in Nov 2018.
> >>
> >> I'd really like to see the community converge on fewer release lines
> and
> >> make more frequent releases in each line.
> >>
> >> Thanks,
> >> Weichiu
> >>
> >>
> >> On Fri, Mar 6, 2020 at 5:47 PM Wei-Chiu Chuang 
> wrote:
> >>
> >> > I think that's a great suggestion.
> >> > Currently, we make 1 minor release per year, and within each minor
> release
> >> > we bring up 1 thousand to 2 thousand commits in it compared with the
> >> > previous one.
> >> > I can totally understand it is a big bite for users to swallow.
> Having a
> >> > more frequent release cycle, plus LTS and non-LTS releases should
> help with
> >> > this. (Of course we will need to make the release preparation much
> easier,
> >> > which is currently a pain)
> >> >
> >> > I am happy to discuss the release model further in the dev ML. LTS
> v.s.
> >> > non-LTS is one suggestion.
> >> >
> >> > Another similar issue: In the past Hadoop strived to
> >> > maintain compatibility. However, this is no longer sustainable as
> more CVEs
> >> > coming from our dependencies: netty, jetty, jackson ... etc.
> >> > In many cases, updating the dependencies brings breaking changes. More
> >> > recently, especially in Hadoop 3.x, I started to make the effort to
> update
> >> > dependencies much more frequently. How do users feel about this
> change?
> >> >
> >> > On Thu, Mar 5, 2020 at 7:58 AM Igor Dvorzhak 
> >> > wrote:
> >> >
> >> >> Maybe Hadoop will benefit from adopting a similar release and support
> >> >> strategy as Java? I.e. designate some releases as LTS and support
> them for
> >> >> 2 (?) years (it seems that 2.7.x branch was de-facto LTS), other
> non-LTS
> >> >> releases will be supported for 6 months (or until next release). This
> >> should allow reducing the maintenance cost of non-LTS releases and
> provide
> >> >> conservative users desired stability by allowing them to wait for
> new LTS
> >> >> release and upgrading to it.
> >> >>
> >> >> On Thu, Mar 5, 2020 at 1:26 AM Rupert Mazzucco <
> rupert.mazzu...@gmail.com>
> >> >> wrote:
> >> >>
> >> >>> After recently jumping from 2.7.7 to 2.10 without issue myself, I
> vote
> >> >>> for keeping only the 2.10 line.
> >> >>> It would seem all other 2.x branches can upgrade to a 2.10.x easily
> if
> >> >>> they feel like upgrading at all,
> >> >>> unlike a jump to 3.x, which may require more planning.
> >> >>>
> >> >>> I also vote for having only one main 3.x branch. Why are there
> 3.1.x and
> >> >>> 3.2.x seemingly competing,
> >> >>> and now 3.3.x? For a community that does not have the resources to
> >> >>> manage multiple release lines,
> >> >>> you guys sure like to multiply release lines a lot.
> >> >>>
> >> >>> Cheers
> >> >>> Rupert
> >> >>>
> >> >>> Am Mi., 4. März 2020 um 19:40 Uhr schrieb Wei-Chiu Chuang
> >> >>> :
> >> >>>
> >> >>>> Forwarding the discussion thread from the dev mailing lists to the
> user
> >> >>>> mailing lists.
> >> >>>>
> >> >>>> I'd like to get an idea of how many users are still on Hadoop 2.9.
> >> >>>> Please share your thoughts.
> >> >>>>
> >> >>>> On Mon, Mar 2, 2020 at 6:30 PM Sree Vaddi
> >> >>>>  wrote:
> >> >>>>
> >> >>>>> +1
> >> >>>>>
> >> >>>>> Sent from Yahoo Mail on Android
> >> >>>>>
> >> >>>>>   On Mon, Mar 2, 2020 at 5:12 PM, Wei-Chiu Chuang<
> weic...@apache.org>
> >> >>>>> wrote:   Hi,
> >> >>>>>
> >> >>>>> Following the discussion to end branch-2.8, I want to start a
> >> >>>>> discussion
> >> >>>>> around what's next with branch-2.9. I am hesitant to use the word
> "end
> >> >>>>> of
> >> >>>>> life" but consider these facts:
> >> >>>>>
> >> >>>>> * 2.9.0 was released Dec 17, 2017.
> >> >>>>> * 2.9.2, the last 2.9.x release, went out Nov 19 2018, which is
> more
> >> >>>>> than
> >> >>>>> 15 months ago.
> >> >>>>> * no one seems to be interested in being the release manager for
> 2.9.3.
> >> >>>>> * Most if not all of the active Hadoop contributors are using
> Hadoop
> >> >>>>> 2.10
> >> >>>>> or Hadoop 3.x.
> >> >>>>> * We as a community do not have the cycles to manage multiple
> release
> >> >>>>> lines,
> >> >>>>> especially since Hadoop 3.3.0 is coming out soon.
> >> >>>>>
> >> >>>>> It is perhaps the time to gradually reduce our footprint in Hadoop
> >> >>>>> 2.x, and
> >> >>>>> encourage people to upgrade to Hadoop 3.x
> >> >>>>>
> >> >>>>> Thoughts?
> >> >>>>>
> >> >>>>>
>


-- 
John Zhuge


Re: [VOTE] EOL Hadoop branch-2.8

2020-03-08 Thread John Zhuge
+1

On Tue, Mar 3, 2020 at 7:48 PM Bharat Viswanadham  wrote:

> +1
>
> Thanks,
> Bharat
>
> On Tue, Mar 3, 2020 at 7:46 PM Zhankun Tang  wrote:
>
> > Thanks, Wei-Chiu. +1.
> >
> > BR,
> > Zhankun
> >
> > On Wed, 4 Mar 2020 at 08:03, Wilfred Spiegelenburg
> >  wrote:
> >
> > > +1
> > >
> > > Wilfred
> > >
> > > > On 3 Mar 2020, at 05:48, Wei-Chiu Chuang  wrote:
> > > >
> > > > I am sorry I forgot to start a VOTE thread.
> > > >
> > > > This is the "official" vote thread to mark branch-2.8 End of Life.
> This
> > > is
> > > > based on the following thread and the tracking jira (HADOOP-16880
> > > > <https://issues.apache.org/jira/browse/HADOOP-16880>).
> > > >
> > > > This vote will run for 7 days and conclude on March 9th (Mon) 11am
> PST.
> > > >
> > > > Please feel free to share your thoughts.
> > > >
> > > > Thanks,
> > > > Weichiu
> > > >
> > > > On Mon, Feb 24, 2020 at 10:28 AM Wei-Chiu Chuang <
> weic...@cloudera.com
> > >
> > > > wrote:
> > > >
> > > >> Looking at the EOL policy wiki:
> > > >>
> > >
> >
> https://cwiki.apache.org/confluence/display/HADOOP/EOL+%28End-of-life%29+Release+Branches
> > > >>
> > > >> The Hadoop community can still elect to make security update for
> > EOL'ed
> > > >> releases.
> > > >>
> > > >> I think the EOL is to give more clarity to downstream applications
> > (such
> > > >> as HBase) the guidance of which Hadoop release lines are still
> active.
> > > >> Additionally, I don't think it is sustainable to maintain 6
> concurrent
> > > >> release lines in this big project, which is why I wanted to start
> this
> > > >> discussion.
> > > >>
> > > >> Thoughts?
> > > >>
> > > >> On Mon, Feb 24, 2020 at 10:22 AM Sunil Govindan 
> > > wrote:
> > > >>
> > > >>> Hi Wei-Chiu
> > > >>>
> > > >>> Extremely sorry for the late reply here.
> > > >>> Could you please help add more clarity on what will happen for
> > > >>> branch-2.8 when we call EOL.
> > > >>> Does this mean that no more releases will come out from this branch,
> or
> > > some
> > > >>> additional guidelines?
> > > >>>
> > > >>> - Sunil
> > > >>>
> > > >>>
> > > >>> On Mon, Feb 24, 2020 at 11:47 PM Wei-Chiu Chuang
> > > >>>  wrote:
> > > >>>
> > > >>>> This thread has been running for 7 days and no -1.
> > > >>>>
> > > >>>> Don't think we've established a formal EOL process, but to
> publicize
> > > the
> > > >>>> EOL, I am going to file a jira, update the wiki and post the
> > > >>> announcement
> > > >>>> to general@ and user@
> > > >>>>
> > > >>>> On Wed, Feb 19, 2020 at 1:40 PM Dinesh Chitlangia <
> > > >>> dineshc@gmail.com>
> > > >>>> wrote:
> > > >>>>
> > > >>>>> Thanks Wei-Chiu for initiating this.
> > > >>>>>
> > > >>>>> +1 for 2.8 EOL.
> > > >>>>>
> > > >>>>> On Tue, Feb 18, 2020 at 10:48 PM Akira Ajisaka <
> > aajis...@apache.org>
> > > >>>>> wrote:
> > > >>>>>
> > > >>>>>> Thanks Wei-Chiu for starting the discussion,
> > > >>>>>>
> > > >>>>>> +1 for the EoL.
> > > >>>>>>
> > > >>>>>> -Akira
> > > >>>>>>
> > > >>>>>> On Tue, Feb 18, 2020 at 4:59 PM Ayush Saxena <
> ayush...@gmail.com>
> > > >>>> wrote:
> > > >>>>>>
> > > >>>>>>> Thanx Wei-Chiu for initiating this
> > > >>>>>>> +1 for marking 2.8 EOL
> > > >>>>>>>
> > > >>>>>>> -Ayush
> > > >>>>>>>
> > > >>>>>>>> On 17-Feb-2020, at 11:14 PM, Wei-Chiu Chuang <
> > > >>> weic...@apache.org>
> > > >>>>>> wrote:
> > > >>>>>>>>
> > > >>>>>>>> The last Hadoop 2.8.x release, 2.8.5, was GA on September
> 15th
> > > >>>> 2018.
> > > >>>>>>>>
> > > >>>>>>>> It's been 17 months since the release and the community by and
> > > >>>> large
> > > >>>>>> have
> > > >>>>>>>> moved up to 2.9/2.10/3.x.
> > > >>>>>>>>
> > > >>>>>>>> With Hadoop 3.3.0 over the horizon, is it time to start the
> EOL
> > > >>>>>>> discussion
> > > >>>>>>>> and reduce the number of active branches?
> > > >>>>>>>
> > > >>>>>>>
> > > >>>
> > > >>>>>>>
> > > >>>>>>>
> > > >>>>>>
> > > >>>>>
> > > >>>>
> > > >>>
> > > >>
> > >
> > > Wilfred Spiegelenburg
> > > Staff Software Engineer
> > >  <https://www.cloudera.com/>
> > >
> >
>
-- 
John Zhuge


Re: [ANNOUNCE] New Apache Hadoop Committer - Stephen O'Donnell

2020-03-04 Thread John Zhuge
Congratulations Stephen!!

On Tue, Mar 3, 2020 at 7:45 PM Zhankun Tang  wrote:

> Congrats! Stephen!
>
> BR,
> Zhankun
>
> On Wed, 4 Mar 2020 at 04:27, Ayush Saxena  wrote:
>
> > Congratulations Stephen!!!
> >
> > -Ayush
> >
> > > On 04-Mar-2020, at 1:51 AM, Bharat Viswanadham 
> > wrote:
> > >
> > > Congratulations Stephen!
> > >
> > > Thanks,
> > > Bharat
> > >
> > >
> > >> On Tue, Mar 3, 2020 at 12:12 PM Wei-Chiu Chuang 
> > wrote:
> > >>
> > >> In bcc: general@
> > >>
> > >> It's my pleasure to announce that Stephen O'Donnell has been elected
> as
> > >> committer on the Apache Hadoop project recognizing his continued
> > >> contributions to the
> > >> project.
> > >>
> > >> Please join me in congratulating him.
> > >>
> > >> Hearty Congratulations & Welcome aboard Stephen!
> > >>
> > >> Wei-Chiu Chuang
> > >> (On behalf of the Hadoop PMC)
> > >>
> >
> > -
> > To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
> >
> >
>


-- 
John Zhuge


Re: [jira] [Created] (HADOOP-16219) Hadoop branch-2 to set java language version to 1.8

2019-03-28 Thread John Zhuge
+1

On Thu, Mar 28, 2019 at 12:33 PM Steve Loughran (JIRA) 
wrote:

> Steve Loughran created HADOOP-16219:
> ---
>
>  Summary: Hadoop branch-2 to set java language version to 1.8
>  Key: HADOOP-16219
>  URL: https://issues.apache.org/jira/browse/HADOOP-16219
>  Project: Hadoop Common
>   Issue Type: Improvement
>   Components: build
> Affects Versions: 2.10.0
> Reporter: Steve Loughran
>
>
> Java 7 is long EOL; having branch-2 require it is simply making the
> release process a pain (we aren't building, testing, or releasing on java 7
> JVMs any more, are we?).
>
> Staying on java 7 complicates backporting, JAR updates for CVEs (hello
> Guava!) &c are becoming impossible.
>
> Proposed: increment javac.version = 1.8
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v7.6.3#76005)
>
>
>

-- 
John


Re: [DISCUSS] Making submarine to different release model like Ozone

2019-02-01 Thread John Zhuge
+1

Does Submarine support Jupyter?

On Fri, Feb 1, 2019 at 8:54 AM Zhe Zhang  wrote:

> +1 on the proposal and looking forward to the progress of the project!
>
> On Thu, Jan 31, 2019 at 10:51 PM Weiwei Yang  wrote:
>
> > Thanks for proposing this Wangda, my +1 as well.
> > It is amazing to see the progress made in Submarine last year, the
> > community grows fast and is quite collaborative. I can see the reasons to
> get
> > it released faster in its own cycle. And at the same time, the Ozone way
> > works very well.
> >
> > —
> > Weiwei
> > On Feb 1, 2019, 10:49 AM +0800, Xun Liu , wrote:
> > > +1
> > >
> > > Hello everyone,
> > >
> > > I am Xun Liu, the head of the machine learning team at Netease Research
> > Institute. I quite agree with Wangda.
> > >
> > > Our team is very grateful for getting Submarine machine learning engine
> > from the community.
> > > We are heavy users of Submarine.
> > > Because Submarine fits into the direction of our big data team's hadoop
> > technology stack,
> > > It avoids the need to increase the manpower investment in learning
> > other container scheduling systems.
> > > The important thing is that we can use a common YARN cluster to run
> > machine learning,
> > > which makes the utilization of server resources more efficient, and
> > has saved a lot of human and material resources over the past years.
> > >
> > > Our team have finished the test and deployment of the Submarine and
> will
> > provide the service to our e-commerce department (http://www.kaola.com/)
> > shortly.
> > >
> > > We also plan to provide the Submarine engine in our existing YARN
> > cluster in the next six months.
> > > Because we have a lot of product departments that need to use machine
> > learning services,
> > > for example:
> > > 1) Game department (http://game.163.com/) needs AI battle training,
> > > 2) News department (http://www.163.com) needs news recommendation,
> > > 3) Mailbox department (http://www.163.com) requires anti-spam and
> > illegal detection,
> > > 4) Music department (https://music.163.com/) requires music
> > recommendation,
> > > 5) Education department (http://www.youdao.com) requires voice
> > recognition,
> > > 6) Massive Open Online Courses (https://open.163.com/) requires
> > multilingual translation and so on.
> > >
> > > If Submarine can be released independently like Ozone, it will help us
> > quickly get the latest features and improvements, and it will be great
> > helpful to our team and users.
> > >
> > > Thanks hadoop Community!
> > >
> > >
> > > > 在 2019年2月1日,上午2:53,Wangda Tan  写道:
> > > >
> > > > Hi devs,
> > > >
> > > > Since we started submarine-related effort last year, we received a
> lot
> > of
> > > > feedbacks, several companies (such as Netease, China Mobile, etc.)
> are
> > > > trying to deploy Submarine to their Hadoop cluster along with big
> data
> > > > workloads. Linkedin also has big interests to contribute a Submarine
> > TonY (
> > > > https://github.com/linkedin/TonY) runtime to allow users to use the
> > same
> > > > interface.
> > > >
> > > > From what I can see, there're several issues of putting Submarine
> under
> > > > yarn-applications directory and have same release cycle with Hadoop:
> > > >
> > > > 1) We started the 3.2.0 release in Sep 2018, but it was done in
> Jan
> > > > 2019. Because of unpredictable blockers and security issues, it got
> > > > delayed a lot. We need to iterate Submarine fast at this point.
> > > >
> > > > 2) We also see a lot of requirements to use Submarine on older Hadoop
> > > > releases such as 2.x. Many companies may not upgrade Hadoop to 3.x
> in a
> > > > short time, but the requirement to run deep learning is urgent to
> > them. We
> > > > should decouple Submarine from Hadoop version.
> > > >
> > > > And why do we want to keep it within Hadoop? First, Submarine included
> > some
> > > > innovation parts such as enhancements of user experiences for YARN
> > > > services/containerization support which we can add it back to Hadoop
> > later
> > > > to address common requirements. In addition to that, we have a big
> > overlap
> > > > in the community developing and using it.
> > > >
> > > > There're several proposals we went through during the Ozone merge to
> > trunk
> > > > discussion:
> > > >
> >
> https://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201803.mbox/%3ccahfhakh6_m3yldf5a2kq8+w-5fbvx5ahfgs-x1vajw8gmnz...@mail.gmail.com%3E
> > > >
> > > > I propose to adopt Ozone model: which is the same master branch,
> > different
> > > > release cycle, and different release branch. It is a great example to
> > show
> > > > agile release we can do (2 Ozone releases after Oct 2018) with less
> > > > overhead to setup CI, projects, etc.
> > > >
> > > > *Links:*
> > > > - JIRA: https://issues.apache.org/jira/browse/YARN-8135
> > > > - Design doc
> > > > <
> >
> https://docs.google.com/document/d/199J4pB3blqgV9SCNvBbTqkEoQdjoyGMjESV4MktCo0k/edit
> > >
> > > > - User doc
> > > > <
> >
> https:/

Re: [VOTE] Merge HADOOP-15407 to trunk

2018-09-19 Thread John Zhuge
+1 (Binding)  Nice effort all around.

On Tue, Sep 18, 2018 at 2:45 AM Steve Loughran 
wrote:

> +1 (Binding)
>
> I've been testing this; the current branch is rebased to trunk and all the
> new tests are happy.
>
>
> the connector is as good as any of the connectors get before they are
> ready for people to play with: there are always surprises in the wild,
> usually network and configuration. For those we have to wait and see what
> happens.
>
>
>
>
>
>
> > On 18 Sep 2018, at 04:10, Sean Mackrory  wrote:
> >
> > All,
> >
> > I would like to propose that HADOOP-15407 be merged to trunk. As
> described
> > in that JIRA, this is a complete reimplementation of the current
> > hadoop-azure storage driver (WASB) with some significant advantages. The
> > impact outside of that module is very limited, however, and it appears
> that
> > on-going improvements will continue to be so. The tests have been stable
> > for some time, and I believe we've reached the point of being ready for
> > broader feedback and to continue incremental improvements in trunk.
> > HADOOP-15407 was rebased on trunk today and I had a successful test run.
> >
> > I'd like to call out the contributions of Thomas Marquardt, Da Zhou, and
> Steve
> > Loughran who have all contributed significantly to getting this branch to
> > its current state. Numerous other developers are named in the commit log
> > and the JIRA.
> >
> > I'll start us off:
> >
> > +1 (binding)
>
>

-- 
John


Re: HADOOP-14163 proposal for new hadoop.apache.org

2018-09-02 Thread John Zhuge
+1 Like the new site.

On Sun, Sep 2, 2018 at 7:02 PM Weiwei Yang  wrote:

> That's really nice, +1.
>
> --
> Weiwei
>
> On Sat, Sep 1, 2018 at 4:36 AM Wangda Tan  wrote:
>
> > +1, thanks for working on this, Marton!
> >
> > Best,
> > Wangda
> >
> > On Fri, Aug 31, 2018 at 11:24 AM Arpit Agarwal  >
> > wrote:
> >
> > > +1
> > >
> > > Thanks for initiating this Marton.
> > >
> > >
> > > On 8/31/18, 1:07 AM, "Elek, Marton"  wrote:
> > >
> > > Bumping this thread at last time.
> > >
> > > I have the following proposal:
> > >
> > > 1. I will request a new git repository hadoop-site.git and import
> the
> > > new site to there (which has exactly the same content as the
> existing
> > > site).
> > >
> > > 2. I will ask infra to use the new repository as the source of
> > > hadoop.apache.org
> > >
> > > 3. I will sync manually all of the changes in the next two months
> > back
> > > to the svn site from the git (release announcements, new
> committers)
> > >
> > > IN CASE OF ANY PROBLEM we can switch back to the svn without any
> > > problem.
> > >
> > > If no-one objects within three days, I'll assume lazy consensus and
> > > start with this plan. Please comment if you have objections.
> > >
> > > Again: it allows immediate fallback at any time as svn repo will be
> > > kept
> > > as is (+ I will keep it up-to-date in the next 2 months)
> > >
> > > Thanks,
> > > Marton
> > >
> > >
> > > On 06/21/2018 09:00 PM, Elek, Marton wrote:
> > > >
> > > > Thank you very much to bump up this thread.
> > > >
> > > >
> > > > About [2]: (Just for the clarification) the content of the
> proposed
> > > > website is exactly the same as the old one.
> > > >
> > > > About [1]. I believe that the "mvn site" is perfect for the
> > > > documentation, but for website creation there are simpler and more
> > > > powerful tools.
> > > >
> > > > Hugo is simpler than jekyll: just one binary, without
> > > > dependencies, works everywhere (mac, linux, windows)
> > > >
> > > > Hugo is much more powerful than "mvn site". It is easier to
> > > create/use
> > > > more modern layout/theme, and easier to handle the content (for
> > > example
> > > > new release announcements could be generated as part of the
> release
> > > > process)
> > > >
> > > > I think it's very low risk to try out a new approach for the site
> > > (and
> > > > easy to rollback in case of problems)
> > > >
> > > > Marton
> > > >
> > > > ps: I just updated the patch/preview site with the recent
> releases:
> > > >
> > > > ***
> > > > * http://hadoop.anzix.net *
> > > > ***
> > > >
> > > > On 06/21/2018 01:27 AM, Vinod Kumar Vavilapalli wrote:
> > > >> Got pinged about this offline.
> > > >>
> > > >> Thanks for keeping at it, Marton!
> > > >>
> > > >> I think there are two road-blocks here
> > > >>   (1) Is the mechanism using which the website is built good
> > enough
> > > -
> > > >> mvn-site / hugo etc?
> > > >>   (2) Is the new website good enough?
> > > >>
> > > >> For (1), I just think we need more committer attention and get
> > > >> feedback rapidly and get it in.
> > > >>
> > > >> For (2), how about we do it in a different way in the interest
> of
> > > >> progress?
> > > >>   - We create a hadoop.apache.org/new-site/ where this new site
> > > goes.
> > > >>   - We then modify the existing web-site to say that there is a
> > new
> > > >> site/experience that folks can click on a link and navigate to
> > > >>   - As this new website matures and gets feedback & fixes, we
> > > finally
> > > >> pull the plug at a later point of time when we think we are good
> > to
> > > go.
> > > >>
> > > >> Thoughts?
> > > >>
> > > >> +Vinod
> > > >>
> > > >>> On Feb 16, 2018, at 3:10 AM, Elek, Marton 
> > wrote:
> > > >>>
> > > >>> Hi,
> > > >>>
> > > >>> I would like to bump this thread up.
> > > >>>
> > > >>> TLDR; There is a proposed version of a new hadoop site which is
> > > >>> available from here:
> > https://elek.github.io/hadoop-site-proposal/
> > > and
> > > >>> https://issues.apache.org/jira/browse/HADOOP-14163
> > > >>>
> > > >>> Please let me know what you think about it.
> > > >>>
> > > >>>
> > > >>> Longer version:
> > > >>>
> > > >>> This thread started long time ago to use a more modern hadoop
> > site:
> > > >>>
> > > >>> Goals were:
> > > >>>
> > > >>> 1. To make it easier to manage it (the release entries could be
> > > >>> created by a script as part of the release process)
> > > >>> 2. To use a better look-and-feel
> > > >>> 3. Move it out from svn to git
> > > >>>
> > > >>> I proposed to:
> > > >>>
> > > >>> 1. Move 

Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)

2018-06-08 Thread John Zhuge
Thanks Yongjun for the excellent work driving this release!


+1 (binding)

   - Verified checksums and signatures of tarballs
   - Built source with native, Oracle Java 1.8.0_152 on Mac OS X 10.13.5
   - Verified cloud connectors:
  - ADLS integration tests passed with 1 failure, not a blocker
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure and SSL mode:
  - HDFS basic and ACL
  - WebHDFS CLI ls and REST LISTSTATUS
  - DistCp basic
  - MapReduce wordcount
  - KMS and HttpFS basic and servlets
  - Balancer start/stop


ADLS unit test failure:


[ERROR] Tests run: 43, Failures: 1, Errors: 0, Skipped: 0, Time elapsed:
68.889 s <<< FAILURE! - in
org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive

[ERROR]
testMkdirsWithUmask(org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive)
Time elapsed: 0.851 s  <<< FAILURE!

java.lang.AssertionError: expected:<461> but was:<456>


See https://issues.apache.org/jira/browse/HADOOP-14435. I don't think it is
a blocker.
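For readers puzzled by the numbers above: JUnit prints the permission shorts in decimal, and reading them as octal makes the mismatch clearer. A small sketch of the umask arithmetic follows; the umask value 062 is an assumption taken from the generic filesystem contract test, so verify it against the actual test source:

```java
// The assertion message "expected:<461> but was:<456>" prints permission
// bits in decimal; in octal the mismatch is 0715 vs 0710. The 062 umask
// below is an assumption (check FileSystemContractBaseTest); the octal
// decoding of 461 and 456 is exact.
public class UmaskDecode {
    // POSIX-style mkdir permission: requested mode with umask bits cleared.
    static int applyUmask(int mode, int umask) {
        return mode & ~umask;
    }

    public static void main(String[] args) {
        int expected = applyUmask(0777, 062);
        System.out.println(expected);                        // 461 (decimal)
        System.out.println(Integer.toOctalString(expected)); // 715
        System.out.println(Integer.toOctalString(456));      // 710, what ADLS returned
    }
}
```

So the failure is a single differing group-permission bit (0715 expected, 0710 observed), which supports treating it as a known quirk rather than a release blocker.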

Thanks,

On Fri, Jun 8, 2018 at 12:04 PM, Xiao Chen  wrote:

> Thanks for the effort on this Yongjun.
>
> +1 (binding)
>
>- Built from src
>- Deployed a pseudo distributed HDFS with KMS
>- Ran basic hdfs commands with encryption
>- Sanity checked webui and logs
>
>
> -Xiao
>
> On Fri, Jun 8, 2018 at 10:34 AM, Brahma Reddy Battula <
> brahmareddy.batt...@hotmail.com> wrote:
>
> > Thanks yongjun zhang for driving this release.
> >
> > +1 (binding).
> >
> >
> > ---Built from the source
> > ---Installed HA cluster
> > ---Execute the basic shell commands
> > ---Browsed the UI's
> > ---Ran sample jobs like pi,wordcount
> >
> >
> >
> > 
> > From: Yongjun Zhang 
> > Sent: Friday, June 8, 2018 1:04 PM
> > To: Allen Wittenauer
> > Cc: Hadoop Common; Hdfs-dev; mapreduce-...@hadoop.apache.org;
> > yarn-...@hadoop.apache.org
> > Subject: Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)
> >
> > BTW, thanks Allen and Steve for the discussion and suggestions about the site
> > build problem I hit earlier. Running the following step
> >
> > mvn install -DskipTests
> >
> > before the steps Nanda listed helped to solve the problem.
> >
> > --Yongjun
> >
> >
> >
> >
> > On Thu, Jun 7, 2018 at 6:15 PM, Yongjun Zhang 
> wrote:
> >
> > > Thank you all very much for the testing, feedback and discussion!
> > >
> > > I was able to build outside docker by following the steps Nanda
> > > described, and I saw the same problem; then I tried 3.0.2, released a while
> > > back, it has the same issue.
> > >
> > > As Allen pointed out, it seems the steps to build site are not
> correct. I
> > > have not figured out the correct steps yet.
> > >
> > > At this point, I think this issue should not block the 3.0.3 release.
> While
> > > at the same time we need to figure out the right steps to build the
> site.
> > > Would you please let me know if you think differently?
> > >
> > > We only have the site build issue reported so far, and we don't have
> > > enough PMC votes yet, so we need some more PMCs to help.
> > >
> > > Thanks again, and best regards,
> > >
> > > --Yongjun
> > >
> > >
> > > On Thu, Jun 7, 2018 at 4:15 PM, Allen Wittenauer <
> > a...@effectivemachines.com
> > > > wrote:
> > >
> > >> > On Jun 7, 2018, at 11:47 AM, Steve Loughran  >
> > >> wrote:
> > >> >
> > >> > Actually, Yongjun has been really good at helping me get set up for
> a
> > >> 2.7.7 release, including "things you need to do to get GPG working in
> > the
> > >> docker image”
> > >>
> > >> *shrugs* I use a different release script after some changes
> > >> broke the in-tree version for building on OS X and I couldn’t get the
> > fixes
> > >> committed upstream.  So not sure what the problems are that you are
> > hitting.
> > >>
> > >> > On Jun 7, 2018, at 1:08 PM, Nandakumar Vadivelu <
> > >> nvadiv...@hortonworks.com> wrote:
> > >> >
> > >> > It will be helpful if we can get the correct steps, and also update
> > the
> > >> wiki.
> > >> > https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+
> > >> Release+Validation
> > >>
> > >> Yup. Looking forward to seeing it.
> > >> -
> > >> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> > >> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
> > >>
> > >>
> > >
> >
>



-- 
John


[jira] [Resolved] (HADOOP-14961) Docker failed to build yetus/hadoop:0de40f0: Oracle JDK 8 is NOT installed

2018-02-12 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HADOOP-14961.
-
Resolution: Duplicate

Fixed by HADOOP-14816.

> Docker failed to build yetus/hadoop:0de40f0: Oracle JDK 8 is NOT installed
> --
>
> Key: HADOOP-14961
> URL: https://issues.apache.org/jira/browse/HADOOP-14961
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, test
>Affects Versions: 3.1.0
>    Reporter: John Zhuge
>Priority: Major
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/13546/console 
> {noformat} 
> Downloading Oracle Java 8... 
> --2017-10-18 18:28:11-- 
> http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz
>  
> Resolving download.oracle.com (download.oracle.com)... 
> 23.59.190.131, 23.59.190.130 
> Connecting to download.oracle.com (download.oracle.com)|23.59.190.131|:80... 
> connected. 
> HTTP request sent, awaiting response... 302 Moved Temporarily 
> Location: 
> https://edelivery.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz
>  [following] 
> --2017-10-18 18:28:11-- 
> https://edelivery.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz
>  
> Resolving edelivery.oracle.com (edelivery.oracle.com)... 
> 23.39.16.136, 2600:1409:a:39c::2d3e, 2600:1409:a:39e::2d3e 
> Connecting to edelivery.oracle.com 
> (edelivery.oracle.com)|23.39.16.136|:443... connected. 
> HTTP request sent, awaiting response... 302 Moved 
> Temporarily 
> Location: 
> http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz?AuthParam=1508351411_3d448519d55b9741af15953ef5049a7c
>  [following] 
> --2017-10-18 18:28:11-- 
> http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz?AuthParam=1508351411_3d448519d55b9741af15953ef5049a7c
>  
> Connecting to download.oracle.com (download.oracle.com)|23.59.190.131|:80... 
> connected. 
> HTTP request sent, awaiting response... 404 Not Found 
> 2017-10-18 18:28:12 ERROR 404: Not Found. 
> download failed 
> Oracle JDK 8 is NOT installed. 
> {noformat}
> Looks like Oracle JDK 8u144 is no longer available for download using that 
> link. 8u151 and 8u152 are available.
> Many of last 10 https://builds.apache.org/job/PreCommit-HADOOP-Build/ jobs 
> failed the same way, all on build host H1 and H6.
> [~aw] has a patch available in HADOOP-14816 "Update Dockerfile to use Xenial" 
> for a long term fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-14961) Docker failed to build yetus/hadoop:0de40f0: Oracle JDK 8 is NOT installed

2018-02-12 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reopened HADOOP-14961:
-

> Docker failed to build yetus/hadoop:0de40f0: Oracle JDK 8 is NOT installed
> --
>
> Key: HADOOP-14961
> URL: https://issues.apache.org/jira/browse/HADOOP-14961
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, test
>Affects Versions: 3.1.0
>    Reporter: John Zhuge
>Priority: Major
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/13546/console 
> {noformat} 
> Downloading Oracle Java 8... 
> --2017-10-18 18:28:11-- 
> http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz
>  
> Resolving download.oracle.com (download.oracle.com)... 
> 23.59.190.131, 23.59.190.130 
> Connecting to download.oracle.com (download.oracle.com)|23.59.190.131|:80... 
> connected. 
> HTTP request sent, awaiting response... 302 Moved Temporarily 
> Location: 
> https://edelivery.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz
>  [following] 
> --2017-10-18 18:28:11-- 
> https://edelivery.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz
>  
> Resolving edelivery.oracle.com (edelivery.oracle.com)... 
> 23.39.16.136, 2600:1409:a:39c::2d3e, 2600:1409:a:39e::2d3e 
> Connecting to edelivery.oracle.com 
> (edelivery.oracle.com)|23.39.16.136|:443... connected. 
> HTTP request sent, awaiting response... 302 Moved 
> Temporarily 
> Location: 
> http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz?AuthParam=1508351411_3d448519d55b9741af15953ef5049a7c
>  [following] 
> --2017-10-18 18:28:11-- 
> http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz?AuthParam=1508351411_3d448519d55b9741af15953ef5049a7c
>  
> Connecting to download.oracle.com (download.oracle.com)|23.59.190.131|:80... 
> connected. 
> HTTP request sent, awaiting response... 404 Not Found 
> 2017-10-18 18:28:12 ERROR 404: Not Found. 
> download failed 
> Oracle JDK 8 is NOT installed. 
> {noformat}
> Looks like Oracle JDK 8u144 is no longer available for download using that 
> link. 8u151 and 8u152 are available.
> Many of last 10 https://builds.apache.org/job/PreCommit-HADOOP-Build/ jobs 
> failed the same way, all on build host H1 and H6.
> [~aw] has a patch available in HADOOP-14816 "Update Dockerfile to use Xenial" 
> for a long term fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-13 Thread John Zhuge
Thanks Andrew for the great effort! Here is my late vote.


+1 (binding)

   - Verified checksums and signatures of tarballs
   - Built source with native, Oracle Java 1.8.0_152 on Mac OS X 10.13.2
   - Verified cloud connectors:
  - S3A integration tests (perf tests skipped)
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure and SSL mode:
  - HDFS basic and ACL
  - DistCp basic
  - MapReduce wordcount
  - KMS and HttpFS basic
  - Balancer start/stop
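The checksum step in the checklist above can be scripted with just the JDK; a minimal sketch (the comparison against the published checksum file shipped next to the tarball is left manual, and the file name passed on the command line is whatever artifact you downloaded):

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Sha256File {
    // Hex-encode the SHA-256 digest of a byte array.
    public static String sha256Hex(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(data)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always present
        }
    }

    public static void main(String[] args) throws Exception {
        if (args.length == 0) {
            System.out.println("usage: java Sha256File <release-tarball>");
            return;
        }
        // Compare the printed digest against the published checksum file.
        byte[] data = Files.readAllBytes(Paths.get(args[0]));
        System.out.println(sha256Hex(data) + "  " + args[0]);
    }
}
```

Signature verification still needs GPG against the project KEYS file; this only covers the digest half of the first checklist item.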


On Wed, Dec 13, 2017 at 6:12 PM, Vinod Kumar Vavilapalli  wrote:

> Yes, JIRAs will be filed. The wiki-page idea from the YARN meetup is to
> record all combinations of testing that need to be done and, correspondingly,
> to capture the testing that someone in the community has already done, for
> future perusal.
>
> From what you are saying, I guess we haven't advertised rolling upgrades to
> the public yet, but in our meetups etc. so far, you have been saying that
> rolling upgrades are supported - so I assumed we did put it in our
> messaging.
>
> The important question is if we are or are not allowed to make potentially
> incompatible changes to fix bugs in the process of supporting 2.x to 3.x
> upgrades whether rolling or not.
>
> +Vinod
>
> > On Dec 13, 2017, at 1:05 PM, Andrew Wang 
> wrote:
> >
> > I'm hoping we can address YARN-7588 and any remaining rolling upgrade
> issues in 3.0.x maintenance releases. Beyond a wiki page, it would be
> really great to get JIRAs filed and targeted for tracking as soon as
> possible.
> >
> > Vinod, what do you think we need to do regarding caveating rolling
> upgrade support? We haven't advertised rolling upgrade support between
> major releases outside of dev lists and JIRA. As a new major release, our
> compat guidelines allow us to break compatibility, so I don't think it's
> expected by users.
> >
>
>


-- 
John


Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

2017-12-12 Thread John Zhuge
Oops, the vote was meant for 2.7.5. Sorry for the confusion.

 My 2.8.3 vote is coming up shortly.

On Tue, Dec 12, 2017 at 4:28 PM, John Zhuge  wrote:

> Thanks Junping for the great effort!
>
>
>- Verified checksums and signatures of all tarballs
>- Built source with native, Azul Java 1.7.0_161 on Mac OS X 10.13.2
>- Verified cloud connectors:
>   - All S3A integration tests
>- Deployed both binary and built source to a pseudo cluster, passed
>the following sanity tests in insecure and SSL mode:
>   - HDFS basic and ACL
>   - DistCp basic
>   - MapReduce wordcount
>   - KMS and HttpFS basic
>   - Balancer start/stop
>
>
> Non-blockers
>
>- HADOOP-13030 Handle special characters in passwords in KMS startup
>script. Fixed in 2.8+.
>- NameNode servlets test failures: 403 User dr.who is unauthorized to
>access this page. Researching. Could be just test configuration issue.
>
> John
>
> On Tue, Dec 12, 2017 at 1:10 PM, Eric Badger 
> wrote:
>
>> Thanks, Junping
>>
>> +1 (non-binding) looks good from my end
>>
>> - Verified all hashes and checksums
>> - Built from source on macOS 10.12.6, Java 1.8.0u65
>> - Deployed a pseudo cluster
>> - Ran some example jobs
>>
>> Eric
>>
>> On Tue, Dec 12, 2017 at 12:55 PM, Konstantin Shvachko <
>> shv.had...@gmail.com>
>> wrote:
>>
>> > Downloaded again, now the checksums look good. Sorry my fault
>> >
>> > Thanks,
>> > --Konstantin
>> >
>> > On Mon, Dec 11, 2017 at 5:03 PM, Junping Du 
>> wrote:
>> >
>> > > Hi Konstantin,
>> > >
>> > >  Thanks for verification and comments. I was verifying your
>> example
>> > > below but found it is actually matched:
>> > >
>> > >
>> > > *jduMBP:hadoop-2.8.3 jdu$ md5 ~/Downloads/hadoop-2.8.3-src.tar.gz*
>> > > *MD5 (/Users/jdu/Downloads/hadoop-2.8.3-src.tar.gz) =
>> > > e53d04477b85e8b58ac0a26468f04736*
>> > >
>> > > What's your md5 checksum for given source tar ball?
>> > >
>> > >
>> > > Thanks,
>> > >
>> > >
>> > > Junping
>> > >
>> > >
>> > > --
>> > > *From:* Konstantin Shvachko 
>> > > *Sent:* Saturday, December 9, 2017 11:06 AM
>> > > *To:* Junping Du
>> > > *Cc:* common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org;
>> > > mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
>> > > *Subject:* Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)
>> > >
>> > > Hey Junping,
>> > >
>> > > Could you pls upload mds relative to the tar.gz etc. files rather than
>> > > their full path
>> > > /build/source/target/artifacts/hadoop-2.8.3-src.tar.gz:
>> > >MD5 = E5 3D 04 47 7B 85 E8 B5  8A C0 A2 64 68 F0 47 36
>> > >
>> > > Otherwise mds don't match for me.
>> > >
>> > > Thanks,
>> > > --Konstantin
>> > >
>> > > On Tue, Dec 5, 2017 at 1:58 AM, Junping Du 
>> wrote:
>> > >
>> > >> Hi all,
>> > >>  I've created the first release candidate (RC0) for Apache Hadoop
>> > >> 2.8.3. This is our next maint release to follow up 2.8.2. It
>> includes 79
>> > >> important fixes and improvements.
>> > >>
>> > >>   The RC artifacts are available at:
>> > >> http://home.apache.org/~junping_du/hadoop-2.8.3-RC0
>> > >>
>> > >>   The RC tag in git is: release-2.8.3-RC0
>> > >>
>> > >>   The maven artifacts are available via repository.apache.org
>> at:
>> > >> https://repository.apache.org/content/repositories/orgapachehadoop-1072
>> > >>
>> > >>   Please try the release and vote; the vote will run for the
>> usual 5
>> > >> working days, ending on 12/12/2017 PST time.
>> > >>
>> > >> Thanks,
>> > >>
>> > >> Junping
>> > >>
>> > >
>> > >
>> >
>>
>
>
>
> --
> John
>



-- 
John Zhuge


Re: [VOTE] Release Apache Hadoop 2.7.5 (RC1)

2017-12-12 Thread John Zhuge
Thanks Konstantin for the great effort!


+1 (binding)

   - Verified checksums and signatures of all tarballs
   - Built source with native, Azul Java 1.7.0_161 on Mac OS X 10.13.2
   - Verified cloud connectors:
  - All S3A integration tests
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure and SSL mode:
  - HDFS basic and ACL
  - DistCp basic
  - MapReduce wordcount
  - KMS and HttpFS basic
  - Balancer start/stop


Non-blockers

   - HADOOP-13030 Handle special characters in passwords in KMS startup
   script. Fixed in 2.8+.
   - NameNode servlets test failures: 403 User dr.who is unauthorized to
   access this page. Still researching; it could be just a test configuration
   issue.


On Tue, Dec 12, 2017 at 2:14 PM, Eric Badger 
wrote:

> Thanks, Konstantin. Everything looks good to me
>
> +1 (non-binding)
>
> - Verified all signatures and digests
> - Built from source on macOS 10.12.6, Java 1.8.0u65
> - Deployed a pseudo cluster
> - Ran some example jobs
>
> Eric
>
> On Tue, Dec 12, 2017 at 11:01 AM, Jason Lowe 
> wrote:
>
> > Thanks for driving the release, Konstantin!
> >
> > +1 (binding)
> >
> > - Verified signatures and digests
> > - Successfully performed a native build from source
> > - Deployed a single-node cluster
> > - Ran some sample jobs and checked the logs
> >
> > Jason
> >
> >
> > On Thu, Dec 7, 2017 at 9:22 PM, Konstantin Shvachko <
> shv.had...@gmail.com>
> > wrote:
> >
> > > Hi everybody,
> > >
> > > I updated CHANGES.txt and fixed documentation links.
> > > Also committed  MAPREDUCE-6165, which fixes a consistently failing
> test.
> > >
> > > This is RC1 for the next dot release of Apache Hadoop 2.7 line. The
> > > previous one 2.7.4 was release August 4, 2017.
> > > Release 2.7.5 includes critical bug fixes and optimizations. See more
> > > details in Release Note:
> > > http://home.apache.org/~shv/hadoop-2.7.5-RC1/releasenotes.html
> > >
> > > The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.5-RC1/
> > >
> > > Please give it a try and vote on this thread. The vote will run for 5
> > days
> > > ending 12/13/2017.
> > >
> > > My up to date public key is available from:
> > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > >
> > > Thanks,
> > > --Konstantin
> > >
> >
>



-- 
John


Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

2017-12-12 Thread John Zhuge
Thanks Junping for the great effort!


   - Verified checksums and signatures of all tarballs
   - Built source with native, Azul Java 1.7.0_161 on Mac OS X 10.13.2
   - Verified cloud connectors:
  - All S3A integration tests
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure and SSL mode:
  - HDFS basic and ACL
  - DistCp basic
  - MapReduce wordcount
  - KMS and HttpFS basic
  - Balancer start/stop


Non-blockers

   - HADOOP-13030 Handle special characters in passwords in KMS startup
   script. Fixed in 2.8+.
   - NameNode servlets test failures: 403 User dr.who is unauthorized to
   access this page. Still researching; it could be just a test configuration
   issue.

John

On Tue, Dec 12, 2017 at 1:10 PM, Eric Badger 
wrote:

> Thanks, Junping
>
> +1 (non-binding) looks good from my end
>
> - Verified all hashes and checksums
> - Built from source on macOS 10.12.6, Java 1.8.0u65
> - Deployed a pseudo cluster
> - Ran some example jobs
>
> Eric
>
> On Tue, Dec 12, 2017 at 12:55 PM, Konstantin Shvachko <
> shv.had...@gmail.com>
> wrote:
>
> > Downloaded again, now the checksums look good. Sorry my fault
> >
> > Thanks,
> > --Konstantin
> >
> > On Mon, Dec 11, 2017 at 5:03 PM, Junping Du  wrote:
> >
> > > Hi Konstantin,
> > >
> > >  Thanks for verification and comments. I was verifying your example
> > > below but found it is actually matched:
> > >
> > >
> > > *jduMBP:hadoop-2.8.3 jdu$ md5 ~/Downloads/hadoop-2.8.3-src.tar.gz*
> > > *MD5 (/Users/jdu/Downloads/hadoop-2.8.3-src.tar.gz) =
> > > e53d04477b85e8b58ac0a26468f04736*
> > >
> > > What's your md5 checksum for given source tar ball?
> > >
> > >
> > > Thanks,
> > >
> > >
> > > Junping
> > >
> > >
> > > --
> > > *From:* Konstantin Shvachko 
> > > *Sent:* Saturday, December 9, 2017 11:06 AM
> > > *To:* Junping Du
> > > *Cc:* common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org;
> > > mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
> > > *Subject:* Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)
> > >
> > > Hey Junping,
> > >
> > > Could you pls upload mds relative to the tar.gz etc. files rather than
> > > their full path
> > > /build/source/target/artifacts/hadoop-2.8.3-src.tar.gz:
> > >MD5 = E5 3D 04 47 7B 85 E8 B5  8A C0 A2 64 68 F0 47 36
> > >
> > > Otherwise mds don't match for me.
> > >
> > > Thanks,
> > > --Konstantin
> > >
> > > On Tue, Dec 5, 2017 at 1:58 AM, Junping Du 
> wrote:
> > >
> > >> Hi all,
> > >>  I've created the first release candidate (RC0) for Apache Hadoop
> > >> 2.8.3. This is our next maint release to follow up 2.8.2. It includes
> 79
> > >> important fixes and improvements.
> > >>
> > >>   The RC artifacts are available at:
> > >> http://home.apache.org/~junping_du/hadoop-2.8.3-RC0
> > >>
> > >>   The RC tag in git is: release-2.8.3-RC0
> > >>
> > >>   The maven artifacts are available via repository.apache.org at:
> > >> https://repository.apache.org/content/repositories/orgapachehadoop-1072
> > >>
> > >>   Please try the release and vote; the vote will run for the
> usual 5
> > >> working days, ending on 12/12/2017 PST time.
> > >>
> > >> Thanks,
> > >>
> > >> Junping
> > >>
> > >
> > >
> >
>



-- 
John


Re: [VOTE] Release Apache Hadoop 3.0.0 RC0

2017-11-16 Thread John Zhuge
+1 (binding)

   - Verified checksums of all tarballs
   - Built source with native, Java 1.8.0_131-b11 on Mac OS X 10.12.6
   - Passed all S3A and ADL integration tests
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure, SSL, and SSL+Kerberos mode:
  - HDFS basic and ACL
  - DistCp basic
  - MapReduce wordcount (skipped in SSL+Kerberos mode)
  - KMS and HttpFS basic
  - Balancer start/stop


On Tue, Nov 14, 2017 at 1:34 PM, Andrew Wang 
wrote:

> Hi folks,
>
> Thanks as always to the many, many contributors who helped with this
> release. I've created RC0 for Apache Hadoop 3.0.0. The artifacts are
> available here:
>
> http://people.apache.org/~wang/3.0.0-RC0/
>
> This vote will run 5 days, ending on Nov 19th at 1:30pm Pacific.
>
> 3.0.0 GA contains 291 fixed JIRA issues since 3.0.0-beta1. Notable
> additions include the merge of YARN resource types, API-based configuration
> of the CapacityScheduler, and HDFS router-based federation.
>
> I've done my traditional testing with a pseudo cluster and a Pi job. My +1
> to start.
>
> Best,
> Andrew
>



-- 
John


Re: [VOTE] Release Apache Hadoop 2.9.0 (RC1)

2017-11-09 Thread John Zhuge
+1 (binding)

   - Verified checksums and signatures of all tarballs
   - Built source with native, Java 1.8.0_131-b11 on Mac OS X 10.12.6
   - Verified these cloud connectors:
  - All S3A integration tests
  - All ADL live unit tests
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure, SSL, and SSL+Kerberos mode:
  - HDFS basic and ACL
  - DistCp basic
  - MapReduce wordcount (skipped in SSL+Kerberos mode)
  - KMS and HttpFS basic
  - Balancer start/stop


On Thu, Nov 9, 2017 at 5:39 PM, Subru Krishnan  wrote:

> Hi Folks,
>
> Apache Hadoop 2.9.0 is the first release of the Hadoop 2.9 line and will be
> the starting release for the Apache Hadoop 2.9.x line - it includes 30 New
> Features with 500+ subtasks, 407 Improvements, and 790 Bug fixes newly fixed
> since 2.8.2.
>
> More information about the 2.9.0 release plan can be found here:
> https://cwiki.apache.org/confluence/display/HADOOP/Roadmap#Roadmap-Version2.9
>
> New RC is available at: http://home.apache.org/~asuresh/hadoop-2.9.0-RC1/
>
> The RC tag in git is: release-2.9.0-RC1, and the latest commit id is:
> 7d2ba3e8dd74d2631c51ce6790d59e50eeb7a846.
>
> The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1066
>
> Please try the release and vote; the vote will run for the usual 5 days,
> ending on Tuesday 14th November 2017 6pm PST time.
>
> Thanks,
> -Subru/Arun
>



-- 
John


[jira] [Resolved] (HADOOP-15012) Add readahead, dropbehind, and unbuffer to StreamCapabilities

2017-11-09 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HADOOP-15012.
-
   Resolution: Fixed
Fix Version/s: 3.1.0

Committed to trunk together with HADOOP-14872. Code review was done there.
{noformat}
6c32ddad302 HADOOP-14872. CryptoInputStream should implement unbuffer. 
Contributed by John Zhuge.
bf6a660232b HADOOP-15012. Add readahead, dropbehind, and unbuffer to 
StreamCapabilities. Contributed by John Zhuge.
{noformat}


> Add readahead, dropbehind, and unbuffer to StreamCapabilities
> -
>
> Key: HADOOP-15012
> URL: https://issues.apache.org/jira/browse/HADOOP-15012
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.9.0
>    Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.1.0
>
>
> A split from HADOOP-14872 to track changes that enhance StreamCapabilities 
> class with READAHEAD, DROPBEHIND, and UNBUFFER capability.
> Discussions and code reviews are done in HADOOP-14872.
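For context, the capability probe this issue extends works roughly like the standalone model below. The interface name mirrors Hadoop's StreamCapabilities, but this sketch carries no Hadoop dependencies and the stream class is made up for illustration:

```java
import java.util.Locale;
import java.util.Set;

// Minimal model of the StreamCapabilities probe pattern: callers ask a
// stream whether it supports an optional operation before invoking it.
public class CapabilityDemo {
    interface StreamCapabilities {
        boolean hasCapability(String capability);
    }

    public static class DemoStream implements StreamCapabilities {
        // The three capabilities this JIRA adds to the enum of known names.
        private static final Set<String> CAPS =
                Set.of("readahead", "dropbehind", "unbuffer");

        @Override
        public boolean hasCapability(String capability) {
            return CAPS.contains(capability.toLowerCase(Locale.ROOT));
        }

        public void unbuffer() {
            // A real stream would release buffers and sockets here.
        }
    }

    public static void main(String[] args) {
        DemoStream in = new DemoStream();
        // Probe first, then invoke the optional operation.
        if (in.hasCapability("unbuffer")) {
            in.unbuffer();
        }
    }
}
```

The point of the probe is that callers degrade gracefully on streams that lack a capability instead of hitting an UnsupportedOperationException.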



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15012) Enhance StreamCapabilities with READAHEAD, DROPBEHIND, and UNBUFFER

2017-11-02 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-15012:
---

 Summary: Enhance StreamCapabilities with READAHEAD, DROPBEHIND, 
and UNBUFFER
 Key: HADOOP-15012
 URL: https://issues.apache.org/jira/browse/HADOOP-15012
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.9.0
Reporter: John Zhuge
Priority: Major


A split from HADOOP-14872 to track changes that enhance StreamCapabilities 
class with READAHEAD, DROPBEHIND, and UNBUFFER capability.

Discussions and code reviews are done in HADOOP-14872.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-14974) org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation fails in trunk

2017-10-23 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HADOOP-14974.
-
  Resolution: Fixed
   Fix Version/s: 3.1.0
Target Version/s: 3.1.0

> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation
>  fails in trunk
> ---
>
> Key: HADOOP-14974
> URL: https://issues.apache.org/jira/browse/HADOOP-14974
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>    Assignee: John Zhuge
>Priority: Blocker
> Fix For: 3.1.0
>
>
> {code}
> org.apache.hadoop.metrics2.MetricsException: Metrics source 
> QueueMetrics,q0=root already exists!
>   at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152)
>   at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:239)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueueMetrics.forQueue(CSQueueMetrics.java:141)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.(AbstractCSQueue.java:131)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.(ParentQueue.java:90)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.parseQueue(CapacitySchedulerQueueManager.java:267)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.initializeQueues(CapacitySchedulerQueueManager.java:158)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initializeQueues(CapacityScheduler.java:639)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initScheduler(CapacityScheduler.java:331)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.serviceInit(CapacityScheduler.java:391)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:756)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1152)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:317)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.serviceInit(MockRM.java:1313)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.(MockRM.java:161)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.(MockRM.java:140)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.(MockRM.java:136)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation.testExcessReservationThanNodeManagerCapacity(TestContainerAllocation.java:90)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-14954) MetricsSystemImpl#init should increment refCount when already initialized

2017-10-23 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reopened HADOOP-14954:
-

Reverted because it broke a bunch of YARN tests, e.g., TestContainerAllocation.

> MetricsSystemImpl#init should increment refCount when already initialized
> -
>
> Key: HADOOP-14954
> URL: https://issues.apache.org/jira/browse/HADOOP-14954
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.0
>    Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14954.001.patch, HADOOP-14954.002.patch, 
> HADOOP-14954.002a.patch
>
>
> {{++refCount}} here in {{init}} should be symmetric to {{--refCount}} in 
> {{shutdown}}.
> {code:java}
>   public synchronized MetricsSystem init(String prefix) {
> if (monitoring && !DefaultMetricsSystem.inMiniClusterMode()) {
>   LOG.warn(this.prefix +" metrics system already initialized!");
>   return this;
> }
> this.prefix = checkNotNull(prefix, "prefix");
> ++refCount;
> {code}
> Move {{++refCount}}  to the beginning of this method.
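A standalone sketch of the symmetric pattern the description proposes (this is not the actual MetricsSystemImpl code, just the ref-count shape): init() bumps the count even when the system is already initialized, so every shutdown()'s decrement has a matching increment.

```java
// Ref-counted lifecycle where init() and shutdown() stay symmetric:
// the count moves on every call, and the underlying state only toggles
// at the 0 -> 1 and 1 -> 0 transitions.
public class RefCountedService {
    private int refCount = 0;
    private boolean initialized = false;

    public synchronized void init() {
        ++refCount;               // increment first, per the proposed move
        if (initialized) {
            return;               // already running; only the count changes
        }
        initialized = true;
    }

    public synchronized void shutdown() {
        if (--refCount > 0) {
            return;               // still referenced elsewhere
        }
        initialized = false;
    }

    public synchronized boolean isInitialized() {
        return initialized;
    }
}
```

With the increment at the top, two init() calls followed by one shutdown() leave the service running, which is the invariant the asymmetric original violated.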



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.8.2 (RC1)

2017-10-23 Thread John Zhuge
 +1 (binding)

   - Verified checksums and signatures of all tarballs
   - Built source with native, Java 1.8.0_131-b11 on Mac OS X 10.12.6
   - Verified cloud connectors:
  - All S3A integration tests
  - All ADL live unit tests
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure, SSL, and SSL+Kerberos mode:
  - HDFS basic and ACL
  - DistCp basic
  - MapReduce wordcount (skipped in SSL+Kerberos mode)
  - KMS and HttpFS basic
  - Balancer start/stop


On Mon, Oct 23, 2017 at 9:03 AM, Ajay Kumar 
wrote:

> Thanks, Junping Du!
>
> +1 (non-binding)
>
> - Built from source
> - Ran hdfs commands
> - Ran pi and sample MR test.
> - Verified the UI's
>
> Thanks,
> Ajay Kumar
>
> On 10/23/17, 8:14 AM, "Shane Kumpf"  wrote:
>
> Thanks, Junping!
>
> +1 (non-binding)
>
> - Verified checksums and signatures
> - Deployed a single node cluster on CentOS 7.2 using the binary tgz,
> source
> tgz, and git tag
> - Ran hdfs commands
> - Ran pi and distributed shell using the default and docker runtimes
> - Verified Docker docs
> - Verified docker runtime can be disabled
> - Verified the UI's
>
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>
>


-- 
John


[jira] [Created] (HADOOP-14963) Add HTrace to ADLS connector

2017-10-18 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14963:
---

 Summary: Add HTrace to ADLS connector
 Key: HADOOP-14963
 URL: https://issues.apache.org/jira/browse/HADOOP-14963
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/adl
Affects Versions: 2.8.0
Reporter: John Zhuge


Add Apache HTrace support to Hadoop ADLS connector in order to support 
distributed tracing.

Make sure the connector and the ADLS SDK support B3 Propagation so that 
tracer/span IDs are sent via HTTP requests to the ADLS backend.

To build an entire distributed tracing solution for ADLS, we will also need 
these components:
* ADLS backend should support one of the Tracers. See 
http://opentracing.io/documentation/pages/supported-tracers.html.
* Zipkin Collector: Event Hub
* Zipkin Storage: MySQL
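For illustration, B3 propagation amounts to attaching trace and span IDs to each outgoing HTTP request as headers, so the backend can stitch spans into one trace. A standalone sketch (header names follow the B3 convention; the sample IDs are made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Build the B3 headers that would be set on each request to the store.
public class B3Headers {
    public static Map<String, String> b3Headers(String traceId, String spanId,
                                                boolean sampled) {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("X-B3-TraceId", traceId);
        headers.put("X-B3-SpanId", spanId);
        headers.put("X-B3-Sampled", sampled ? "1" : "0");
        return headers;
    }

    public static void main(String[] args) {
        b3Headers("463ac35c9f6413ad48485a3953bb6124", "a2fb4a1d1a96d312", true)
                .forEach((k, v) -> System.out.println(k + ": " + v));
    }
}
```

In the real connector the tracer library would generate the IDs and inject the headers; this sketch only shows what ends up on the wire.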




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14961) Docker failed to build yetus/hadoop:0de40f0: Oracle JDK 8 is NOT installed

2017-10-18 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14961:
---

 Summary: Docker failed to build yetus/hadoop:0de40f0: Oracle JDK 8 
is NOT installed
 Key: HADOOP-14961
 URL: https://issues.apache.org/jira/browse/HADOOP-14961
 Project: Hadoop Common
  Issue Type: Bug
Reporter: John Zhuge


https://builds.apache.org/job/PreCommit-HADOOP-Build/13546/console 
{noformat} 
Downloading Oracle Java 8... 
--2017-10-18 18:28:11-- 
http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz
 
Resolving download.oracle.com (download.oracle.com)... 
23.59.190.131, 23.59.190.130 
Connecting to download.oracle.com (download.oracle.com)|23.59.190.131|:80... 
connected. 
HTTP request sent, awaiting response... 302 Moved Temporarily 
Location: 
https://edelivery.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz
 [following] 
--2017-10-18 18:28:11-- 
https://edelivery.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz
 
Resolving edelivery.oracle.com (edelivery.oracle.com)... 
23.39.16.136, 2600:1409:a:39c::2d3e, 2600:1409:a:39e::2d3e 
Connecting to edelivery.oracle.com (edelivery.oracle.com)|23.39.16.136|:443... 
connected. 
HTTP request sent, awaiting response... 302 Moved Temporarily 
Location: 
http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz?AuthParam=1508351411_3d448519d55b9741af15953ef5049a7c
 [following] 
--2017-10-18 18:28:11-- 
http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz?AuthParam=1508351411_3d448519d55b9741af15953ef5049a7c
 
Connecting to download.oracle.com (download.oracle.com)|23.59.190.131|:80... 
connected. 
HTTP request sent, awaiting response... 404 Not Found 
2017-10-18 18:28:12 ERROR 404: Not Found. 

download failed 
Oracle JDK 8 is NOT installed. 
{noformat}

Looks like Oracle JDK 8u144 is no longer available for download using that 
link. 8u151 and 8u152 are available.

Many of last 10 https://builds.apache.org/job/PreCommit-HADOOP-Build/ jobs 
failed the same way, all on build host H1 and H6.

[~aw] has a patch available in HADOOP-14816 "Update Dockerfile to use Xenial" 
for a long term fix.








[jira] [Created] (HADOOP-14954) MetricsSystemImpl#init should increment refCount when already initialized

2017-10-16 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14954:
---

 Summary: MetricsSystemImpl#init should increment refCount when 
already initialized
 Key: HADOOP-14954
 URL: https://issues.apache.org/jira/browse/HADOOP-14954
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.7.0
Reporter: John Zhuge
Priority: Minor


{{++refCount}} here in {{init}} should be symmetric to {{--refCount}} in 
{{shutdown}}.
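
For illustration, the symmetric counting described above can be sketched with a 
minimal reference-counted lifecycle (hypothetical class and method names, not 
the actual MetricsSystemImpl code):

```java
// Minimal sketch of a reference-counted lifecycle: init() must bump the
// count even when the system is already started, so that a matching
// shutdown() from every caller is required before the real stop happens.
public class RefCountedSystem {
    private int refCount = 0;
    private boolean started = false;

    public synchronized void init() {
        ++refCount;              // increment on every init, even if already started
        if (!started) {
            started = true;      // real start-up happens only once
        }
    }

    public synchronized boolean shutdown() {
        if (refCount > 0 && --refCount > 0) {
            return false;        // other users still hold a reference
        }
        started = false;         // last reference released: really stop
        return true;
    }

    public synchronized boolean isStarted() {
        return started;
    }
}
```

Without the increment on an already-started system, the first shutdown() would 
stop the system while other users still depend on it.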






Re: [VOTE] Release Apache Hadoop 3.0.0-beta1 RC0

2017-10-03 Thread John Zhuge
+1 (binding)

   - Verified checksums and signatures of all tarballs
   - Built source with native, Java 1.8.0_131-b11 on Mac OS X 10.12.6
   - Verified cloud connectors:
  - All S3A integration tests
  - All ADL live unit tests
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure, SSL, and SSL+Kerberos mode:
  - HDFS basic and ACL
  - DistCp basic
  - MapReduce wordcount (only failed in SSL+Kerberos mode for binary
  tarball, probably unrelated)
  - KMS and HttpFS basic
  - Balancer start/stop

Hit the following errors but they don't seem to be blocking:

== Missing dependencies during build ==

> ERROR: hadoop-aliyun has missing dependencies: json-lib-jdk15.jar
> ERROR: hadoop-azure has missing dependencies: jetty-util-ajax-9.3.19.
> v20170502.jar
> ERROR: hadoop-azure-datalake has missing dependencies: okhttp-2.4.0.jar
> ERROR: hadoop-azure-datalake has missing dependencies: okio-1.4.0.jar


Filed HADOOP-14923, HADOOP-14924, and HADOOP-14925.

== Unit tests failed in Kerberos+SSL mode for KMS and HttpFs default HTTP
servlet /conf, /stacks, and /logLevel ==

One example below:

>Connecting to
> https://localhost:14000/logLevel?log=org.apache.hadoop.fs.http.server.HttpFSServer
>Exception in thread "main"
> org.apache.hadoop.security.authentication.client.AuthenticationException:
> Authentication failed, URL:
> https://localhost:14000/logLevel?log=org.apache.hadoop.fs.http.server.HttpFSServer&user.name=jzhuge,
> status: 403, message: GSSException: Failure unspecified at GSS-API level
> (Mechanism level: Request is a replay (34))


The /logLevel failure will affect command "hadoop daemonlog".


On Tue, Oct 3, 2017 at 10:56 AM, Andrew Wang 
wrote:

> Thanks for all the votes thus far! We've gotten the binding +1's to close
> the release, though are there contributors who could kick the tires on
> S3Guard and YARN TSv2 alpha2? These are the two new features merged since
> alpha4, so it'd be good to get some coverage.
>
>
>
> On Tue, Oct 3, 2017 at 9:45 AM, Brahma Reddy Battula 
> wrote:
>
> >
> > Thanks Andrew.
> >
> > +1 (non binding)
> >
> > --Built from source
> > --installed 3 node HA cluster
> > --Verified shell commands and UI
> > --Ran wordcount/pic jobs
> >
> >
> >
> >
> > On Fri, 29 Sep 2017 at 5:34 AM, Andrew Wang 
> > wrote:
> >
> >> Hi all,
> >>
> >> Let me start, as always, by thanking the many, many contributors who
> >> helped
> >> with this release! I've prepared an RC0 for 3.0.0-beta1:
> >>
> >> http://home.apache.org/~wang/3.0.0-beta1-RC0/
> >>
> >> This vote will run five days, ending on Nov 3rd at 5PM Pacific.
> >>
> >> beta1 contains 576 fixed JIRA issues comprising a number of bug fixes,
> >> improvements, and feature enhancements. Notable additions include the
> >> addition of YARN Timeline Service v2 alpha2, S3Guard, completion of the
> >> shaded client, and HDFS erasure coding pluggable policy support.
> >>
> >> I've done the traditional testing of running a Pi job on a pseudo
> cluster.
> >> My +1 to start.
> >>
> >> We're working internally on getting this run through our integration
> test
> >> rig. I'm hoping Vijay or Ray can ring in with a +1 once that's complete.
> >>
> >> Best,
> >> Andrew
> >>
> > --
> >
> >
> >
> > --Brahma Reddy Battula
> >
>



-- 
John


[jira] [Created] (HADOOP-14925) hadoop-aliyun has missing dependencies

2017-10-03 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14925:
---

 Summary: hadoop-aliyun has missing dependencies
 Key: HADOOP-14925
 URL: https://issues.apache.org/jira/browse/HADOOP-14925
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/oss
Affects Versions: 3.0.0-beta1
Reporter: John Zhuge
Priority: Minor


Saw these errors uncovered by dist-tools-hooks-maker during build:
{noformat}
ERROR: hadoop-aliyun has missing dependencies: json-lib-jdk15.jar
{noformat}







[jira] [Created] (HADOOP-14924) hadoop-azure-datalake has missing dependencies

2017-10-03 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14924:
---

 Summary: hadoop-azure-datalake has missing dependencies
 Key: HADOOP-14924
 URL: https://issues.apache.org/jira/browse/HADOOP-14924
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/adl
Affects Versions: 3.0.0-beta1
Reporter: John Zhuge
Priority: Minor


Saw these errors uncovered by dist-tools-hooks-maker during build:
{noformat}
ERROR: hadoop-azure-datalake has missing dependencies: okhttp-2.4.0.jar
ERROR: hadoop-azure-datalake has missing dependencies: okio-1.4.0.jar
{noformat}







[jira] [Created] (HADOOP-14923) hadoop-azure has missing dependencies

2017-10-03 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14923:
---

 Summary: hadoop-azure has missing dependencies
 Key: HADOOP-14923
 URL: https://issues.apache.org/jira/browse/HADOOP-14923
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Affects Versions: 3.0.0-beta1
Reporter: John Zhuge
Priority: Minor


Saw these errors uncovered by dist-tools-hooks-maker during build:
{noformat}
ERROR: hadoop-azure has missing dependencies: 
jetty-util-ajax-9.3.19.v20170502.jar
{noformat}







[jira] [Created] (HADOOP-14917) AdlFileSystem should support getStorageStatistics

2017-09-29 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14917:
---

 Summary: AdlFileSystem should support getStorageStatistics
 Key: HADOOP-14917
 URL: https://issues.apache.org/jira/browse/HADOOP-14917
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/adl
Affects Versions: 2.8.0
Reporter: John Zhuge


AdlFileSystem should support the storage statistics introduced by HADOOP-13065, 
so any execution framework gathering the statistics can include them, and tests 
can log them.






[jira] [Created] (HADOOP-14872) CryptoInputStream should implement unbuffer

2017-09-15 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14872:
---

 Summary: CryptoInputStream should implement unbuffer
 Key: HADOOP-14872
 URL: https://issues.apache.org/jira/browse/HADOOP-14872
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.6.4
Reporter: John Zhuge


Discovered in IMPALA-5909.

CryptoInputStream extending FSDataInputStream should implement the unbuffer 
method:
* Release buffer and cache when instructed
* Avoid calling the superclass unbuffer method, which throws 
UnsupportedOperationException (UOE); applications may not handle the UOE well.
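
A standalone sketch of both sides of this contract, using simplified stand-ins 
for the real org.apache.hadoop.fs types (CanUnbuffer, FSDataInputStream):

```java
// BaseStream models a wrapper whose inherited unbuffer() throws UOE;
// CryptoLikeStream models the proposed fix: override unbuffer() to
// actually release cached data instead of delegating to the throwing
// default. tryUnbuffer() shows the defensive caller-side pattern.
public class UnbufferDemo {
    interface CanUnbuffer { void unbuffer(); }

    static class BaseStream {
        void unbuffer() {
            throw new UnsupportedOperationException(
                "this stream does not support unbuffering.");
        }
    }

    static class CryptoLikeStream extends BaseStream implements CanUnbuffer {
        byte[] buffer = new byte[4096];
        @Override public void unbuffer() { buffer = null; } // release cached data
    }

    // Defensive call: never let the UOE propagate to the application.
    static boolean tryUnbuffer(Object stream) {
        if (stream instanceof CanUnbuffer) {
            try {
                ((CanUnbuffer) stream).unbuffer();
                return true;
            } catch (UnsupportedOperationException ignored) {
                return false;
            }
        }
        return false;
    }
}
```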






[jira] [Created] (HADOOP-14864) FSDataInputStream#unbuffer UOE exception should print the stream class name

2017-09-12 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14864:
---

 Summary: FSDataInputStream#unbuffer UOE exception should print the 
stream class name
 Key: HADOOP-14864
 URL: https://issues.apache.org/jira/browse/HADOOP-14864
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.6.4
Reporter: John Zhuge
Priority: Minor


The current exception message:
{noformat}
org/apache/hadoop/fs/ failed: error:
UnsupportedOperationException: this stream does not support 
unbuffering.java.lang.UnsupportedOperationException: this stream does not 
support unbuffering.
at 
org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:233)
{noformat}







[jira] [Created] (HADOOP-14862) Metrics for AdlFileSystem

2017-09-12 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14862:
---

 Summary: Metrics for AdlFileSystem
 Key: HADOOP-14862
 URL: https://issues.apache.org/jira/browse/HADOOP-14862
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/adl
Affects Versions: 2.8.0
Reporter: John Zhuge


Add a Metrics2 source {{AdlFileSystemInstrumentation}} for {{AdlFileSystem}}.

Consider per-thread statistics data if possible. Atomic variables are not 
free on multi-core architectures, and Java does not offer per-CPU data 
structures.
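
While the JDK has no per-CPU counters, java.util.concurrent.atomic.LongAdder 
approximates the idea by striping updates across internal cells, which is a 
plausible building block for such instrumentation (the class below is 
illustrative, not the actual AdlFileSystemInstrumentation API):

```java
import java.util.concurrent.atomic.LongAdder;

// Illustrative read counter: LongAdder stripes contended updates across
// cells, making add() cheaper than a single AtomicLong on multi-core
// machines, at the cost of a slightly more expensive sum().
public class FsStats {
    private final LongAdder bytesRead = new LongAdder();

    public void recordRead(long n) { bytesRead.add(n); }

    public long totalBytesRead() { return bytesRead.sum(); }
}
```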






[jira] [Created] (HADOOP-14832) listing s3a bucket without credentials gives Interrupted error

2017-09-04 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14832:
---

 Summary: listing s3a bucket without credentials gives Interrupted 
error
 Key: HADOOP-14832
 URL: https://issues.apache.org/jira/browse/HADOOP-14832
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 3.0.0-beta1
Reporter: John Zhuge
Priority: Minor


In trunk pseudo distributed mode, without setting s3a credentials, listing an 
s3a bucket only gives "Interrupted" error :
{noformat}
$ hadoop fs -ls s3a://bucket/
ls: Interrupted
{noformat}

In comparison, branch-2 gives a much better error message:
{noformat}
(branch-2)$ hadoop_env hadoop fs -ls s3a://bucket/
ls: doesBucketExist on hdfs-cce: com.amazonaws.AmazonClientException: No AWS 
Credentials provided by BasicAWSCredentialsProvider 
EnvironmentVariableCredentialsProvider InstanceProfileCredentialsProvider : 
com.amazonaws.SdkClientException: Unable to load credentials from service 
endpoint
{noformat}







Re: native folder not found in hadoop-common build on Ubuntu

2017-08-31 Thread John Zhuge
Hi Ping,

Thanks for using Hadoop. Linux is Unix-like. Hadoop supports native code on
Linux. Please read BUILDING.txt in the root of the Hadoop source tree.

Could you provide the entire Maven command line when you built Hadoop?

On Thu, Aug 31, 2017 at 3:06 PM, Ping Liu  wrote:

> I built hadoop-common on Ubuntu in my VirtualBox.  But in target folder, I
> didn't find "native" folder that is supposed to contain the generated JNI
> header files for C.  On my Windows, native folder is found in target.
>
> As I check the POM file, I found "native build only supported on Mac or
> Unix".  Does this mean native is not supported on Linux?
>
> Thanks!
>
> Ping
>



-- 
John


[jira] [Created] (HADOOP-14808) Hadoop keychain

2017-08-25 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14808:
---

 Summary: Hadoop keychain
 Key: HADOOP-14808
 URL: https://issues.apache.org/jira/browse/HADOOP-14808
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: John Zhuge


Extend the idea from HADOOP-6520 "UGI should load tokens from the environment" 
to a generic lightweight "keychain" design. Load keys (secrets) into a keychain 
in UGI (secret map) at startup. YARN will distribute them securely into each 
container. The Hadoop code running in the container can then retrieve the 
credentials from UGI.

The use case is Bring Your Own Key (BYOK) credentials for cloud connectors 
(adl, wasb, s3a, etc.), while Hadoop authentication is still Kerberos. No 
configuration change, no admin involved. It will support YARN applications 
initially, e.g., DistCp, Tera Suite, Spark-on-Yarn, etc.

Implementation is surprisingly simple because almost all pieces are in place:
* Retrieve secrets from UGI using {{conf.getPassword}} backed by the existing 
Credential Provider class {{UserProvider}}
* Reuse Credential Provider classes and interface to define local permanent or 
transient credential store, e.g., LocalJavaKeyStoreProvider
* New: create a new transient Credential Provider that logs into AAD with 
username/password or device code, and then put the Client ID and Refresh Token 
into the keychain
* New: create a new permanent Credential Provider based on Hadoop configuration 
XML, for dev/testing purpose.
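
A toy sketch of the secret-map lookup described above (all names illustrative; 
in the real design the map would live inside UGI and be consulted via 
conf.getPassword through the credential provider chain):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy keychain: secrets loaded at startup are looked up by key, with a
// caller-supplied fallback (e.g. a plain configuration value) when the
// keychain has no entry -- mirroring how Configuration.getPassword falls
// back through credential providers before plain config.
public class Keychain {
    private final Map<String, char[]> secrets = new LinkedHashMap<>();

    public void addSecret(String key, char[] value) {
        secrets.put(key, value);
    }

    public char[] getPassword(String key, char[] fallback) {
        char[] v = secrets.get(key);
        return v != null ? v : fallback;
    }
}
```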

Links
* HADOOP-11766 Generic token authentication support for Hadoop
* HADOOP-11744 Support OAuth2 in Hadoop
* HADOOP-10959 A Kerberos based token authentication approach
* HADOOP-9392 Token based authentication and Single Sign On






[jira] [Created] (HADOOP-14794) Standalone MiniKdc server

2017-08-21 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14794:
---

 Summary: Standalone MiniKdc server
 Key: HADOOP-14794
 URL: https://issues.apache.org/jira/browse/HADOOP-14794
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security, test
Affects Versions: 2.7.0
Reporter: John Zhuge
Assignee: John Zhuge


Add a new subcommand {{hadoop minikdc}} to start a standalone MiniKdc server. 
This will make it easier to test Kerberos in pseudo-distributed mode without an 
external KDC server.






[jira] [Created] (HADOOP-14791) SimpleKdcServer: Fail to delete krb5 conf

2017-08-19 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14791:
---

 Summary: SimpleKdcServer: Fail to delete krb5 conf
 Key: HADOOP-14791
 URL: https://issues.apache.org/jira/browse/HADOOP-14791
 Project: Hadoop Common
  Issue Type: Bug
  Components: minikdc
Affects Versions: 3.0.0-beta1
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


Run MiniKdc in a terminal and then press Ctrl-C:
{noformat}
Do  or kill  to stop it
---

^C2017-08-19 22:52:23,607 INFO impl.DefaultInternalKdcServerImpl: Default 
Internal kdc server stopped.
2017-08-19 22:53:21,358 INFO server.SimpleKdcServer: Fail to delete krb5 conf. 
java.io.IOException
2017-08-19 22:53:22,363 INFO minikdc.MiniKdc: MiniKdc stopped.
{noformat}

"Fail to delete krb5 conf" occurs because MiniKdc renames SimpleKdcServer's 
krb5 conf file. During shutdown, SimpleKdcServer attempts to delete its krb5 
conf file and cannot find it.
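
The failure mode is easy to reproduce with plain java.io.File (standalone 
sketch, not the actual MiniKdc/SimpleKdcServer code):

```java
import java.io.File;
import java.io.IOException;

// After a file is renamed, delete() on the ORIGINAL path returns false
// because that path no longer exists -- the same reason SimpleKdcServer
// logs "Fail to delete krb5 conf" once MiniKdc has renamed its conf file.
public class RenameDeleteDemo {
    public static boolean renameThenDeleteOriginal() {
        try {
            File original = File.createTempFile("krb5", ".conf");
            File renamed = new File(original.getParentFile(),
                original.getName() + ".renamed");
            if (!original.renameTo(renamed)) {
                throw new IOException("rename failed");
            }
            boolean deletedOriginal = original.delete(); // false: path is gone
            renamed.delete(); // clean up the real file
            return deletedOriginal;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```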






Re: [VOTE] Merge HADOOP-13345 (S3Guard feature branch)

2017-08-18 Thread John Zhuge
That will be great. Please record it if possible.

On Fri, Aug 18, 2017 at 4:12 AM, Steve Loughran 
wrote:

>
> I can do a demo of this next week if people are interested
>
> > On 17 Aug 2017, at 23:07, Aaron Fabbri  wrote:
> >
> > Hello,
> >
> > I'd like to open a vote (7 days, ending August 24 at 3:10 PST) to merge
> the
> > HADOOP-13345 feature branch into trunk.
> >
> > This branch contains the new S3Guard feature which adds metadata
> > consistency features to the S3A client.  Formatted site documentation can
> > be found here:
> >
> > https://github.com/apache/hadoop/blob/HADOOP-13345/
> hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md
> >
> > The current patch against trunk is posted here:
> >
> > https://issues.apache.org/jira/browse/HADOOP-13998
> >
> > The branch modifies the s3a portion of the hadoop-tools/hadoop-aws
> module:
> >
> > - The feature is off by default, and care has been taken to ensure it has
> > no impact when disabled.
> > - S3Guard can be enabled with the production database which is backed by
> > DynamoDB, or with a local, in-memory implementation that facilitates
> > integration testing without having to pay for a database.
> > - getFileStatus() as well as directory listing consistency has been
> > implemented and thoroughly tested, including delete tracking.
> > - Convenient Maven profiles for testing with and without S3Guard.
> > - New failure injection code and integration tests that exercise it.  We
> > use timers and a wrapper around the Amazon SDK client object to force
> > consistency delays to occur.  This allows us to assert that S3Guard works
> > as advertised.  This will be extended with more types of failure
> injection
> > to continue hardening the S3A client.
> >
> > Outside of hadoop-tools/hadoop-aws's s3a directory there are some minor
> > changes:
> >
> > - core-default.xml defaults and documentation for s3guard parameters.
> > - A couple additional FS contract test cases around rename.
> > - More goodies in LambdaTestUtils
> > - A new CLI tool for inspecting and manipulating S3Guard features,
> > including the backing MetadataStore database.
> >
> > This branch has seen extensive testing as well as use in production.
> This
> > branch makes significant improvements to S3A's test toolkit as well.
> >
> > Performance is typically on par with, and in some cases better than, the
> > existing S3A code without S3Guard enabled.
> >
> > This feature was developed with contributions and feedback from many
> > people.  I'd like to thank everyone who worked on HADOOP-13345 as well as
> > all of those who contributed feedback and work on the original design
> > document.
> >
> > This is the first major Apache Hadoop project I've worked on from start
> to
> > finish, and I've really enjoyed it.  Please shout if I've missed anything
> > important here or in the VOTE process.
> >
> > Cheers,
> > Aaron Fabbri
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


-- 
John


[jira] [Created] (HADOOP-14786) HTTP default servlets do not require authentication when kerberos is enabled

2017-08-17 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14786:
---

 Summary: HTTP default servlets do not require authentication when 
kerberos is enabled
 Key: HADOOP-14786
 URL: https://issues.apache.org/jira/browse/HADOOP-14786
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: John Zhuge


The default HttpServer2 servlet /jmx, /conf, /logLevel, and /stack do not 
require authentication when Kerberos is enabled.


{code:java|title=HttpServer2#addDefaultServlets}
  // set up default servlets
  addServlet("stacks", "/stacks", StackServlet.class);
  addServlet("logLevel", "/logLevel", LogLevel.Servlet.class);
  addServlet("jmx", "/jmx", JMXJsonServlet.class);
  addServlet("conf", "/conf", ConfServlet.class);
{code}

{code:java|title=HttpServer2#addServlet}
public void addServlet(String name, String pathSpec,
    Class<? extends HttpServlet> clazz) {
  addInternalServlet(name, pathSpec, clazz, false);
  addFilterPathMapping(pathSpec, webAppContext);
}
{code}
{code:java|title=HttpServer2#addInternalServlet}
addInternalServlet(…, boolean requireAuth)
…
if (requireAuth && UserGroupInformation.isSecurityEnabled()) {
  LOG.info("Adding Kerberos (SPNEGO) filter to " + name);
  …
}
{code}

{{requireAuth}} is {{false}} for the default servlets inside 
{{addInternalServlet}}.
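
Condensed into a truth table, the guard quoted above attaches the SPNEGO 
filter only when both conditions hold (stand-in names, not the real 
HttpServer2 code):

```java
// Model of the servlet-registration guard: the SPNEGO auth filter is
// added only when requireAuth AND security are both true, so default
// servlets registered with requireAuth=false stay unauthenticated even
// when Kerberos is enabled -- which is the bug reported here.
public class ServletAuthModel {
    static boolean spnegoFilterAdded(boolean requireAuth, boolean securityEnabled) {
        return requireAuth && securityEnabled;
    }
}
```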

The issue can be verified by running the following curl command against 
NameNode web address when Kerberos is enabled:
{noformat}
kdestroy
curl --negotiate -u: -k -sS 'https://:9871/jmx'
{noformat}
Expect curl to fail, but it returns JMX anyway.






[jira] [Resolved] (HADOOP-14438) Make ADLS doc of setting up client key up to date

2017-08-11 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HADOOP-14438.
-
Resolution: Duplicate
  Assignee: John Zhuge

Take care of both issues in HADOOP-14627.

> Make ADLS doc of setting up client key up to date
> -
>
> Key: HADOOP-14438
> URL: https://issues.apache.org/jira/browse/HADOOP-14438
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Reporter: Mingliang Liu
>    Assignee: John Zhuge
>
> In the doc {{hadoop-tools/hadoop-azure-datalake/src/site/markdown/index.md}}, 
> we have such a statement:
> {code:title=Note down the properties you will need to auth}
> ...
> - Resource: Always https://management.core.windows.net/ , for all customers
> {code}
> Is the {{Resource}} useful here? It seems not necessary to me.
> {code:title=Adding the service principal to your ADL Account}
> - ...
> - Select Users under Settings
> ...
> {code}
> According to the portal, it should be "Access control (IAM)" under "Settings"






[jira] [Created] (HADOOP-14765) AdlFsInputStream to implement CanUnbuffer

2017-08-11 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14765:
---

 Summary: AdlFsInputStream to implement CanUnbuffer
 Key: HADOOP-14765
 URL: https://issues.apache.org/jira/browse/HADOOP-14765
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/adl
Affects Versions: 2.8.0
Reporter: John Zhuge
Priority: Minor


HBase relies on FileSystems implementing CanUnbuffer.unbuffer() to force input 
streams to free up remote connections (HBASE-9393). This works for HDFS, but 
not elsewhere.






[jira] [Created] (HADOOP-14764) Über-jira adl:// Azure Data Lake Phase II: Performance and Testing

2017-08-11 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14764:
---

 Summary: Über-jira adl:// Azure Data Lake Phase II: Performance 
and Testing
 Key: HADOOP-14764
 URL: https://issues.apache.org/jira/browse/HADOOP-14764
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/adl
Affects Versions: 3.0.0-alpha2
Reporter: John Zhuge
Assignee: John Zhuge


Uber JIRA to track things needed for Azure Data Lake to be considered stable 
and adl:// ready for wide use.

Based on the experience with other object stores, the things which usually 
surface once a stabilizing FS is picked up and used are

* handling of many GB files, up and down, be it: efficiency of read, when the 
writes take place, file leakage, time for close() and filesystem shutdown
* resilience to transient failures
* reporting of problems/diagnostics
* security option tuning
* race conditions
* differences between implementation and what actual applications expect






[jira] [Created] (HADOOP-14754) TestCommonConfigurationFields failed: core-default.xml has 2 properties missing in class

2017-08-10 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14754:
---

 Summary: TestCommonConfigurationFields failed: core-default.xml 
has 2 properties missing in class
 Key: HADOOP-14754
 URL: https://issues.apache.org/jira/browse/HADOOP-14754
 Project: Hadoop Common
  Issue Type: Bug
  Components: common, fs/azure
Affects Versions: 2.9.0, 3.0.0-beta1
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


https://builds.apache.org/job/PreCommit-HADOOP-Build/13004/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt:
{noformat}
core-default.xml has 2 properties missing in
  class org.apache.hadoop.fs.CommonConfigurationKeys
  class org.apache.hadoop.fs.CommonConfigurationKeysPublic
  class org.apache.hadoop.fs.local.LocalConfigKeys
  class org.apache.hadoop.fs.ftp.FtpConfigKeys
  class org.apache.hadoop.ha.SshFenceByTcpPort
  class org.apache.hadoop.security.LdapGroupsMapping
  class org.apache.hadoop.ha.ZKFailoverController
  class org.apache.hadoop.security.ssl.SSLFactory
  class org.apache.hadoop.security.CompositeGroupsMapping
  class org.apache.hadoop.io.erasurecode.CodecUtil
{noformat}

Unfortunately, it does not show which 2 properties missing. Ran test manually 
got:
{noformat}
  fs.wasbs.impl
  fs.wasb.impl
{noformat}







[jira] [Created] (HADOOP-14753) Add WASB FileContext tests

2017-08-09 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14753:
---

 Summary: Add WASB FileContext tests
 Key: HADOOP-14753
 URL: https://issues.apache.org/jira/browse/HADOOP-14753
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/azure, test
Affects Versions: 2.8.0
Reporter: John Zhuge
Priority: Minor


Add FileContext contract tests for WASB. See ITestS3AFileContextURI and friends 
for example.






Re: Question about how to best contribute

2017-08-07 Thread John Zhuge
And check out HADOOP-12145 "Organize and update CodeReviewChecklist wiki".

Thanks, your contribution will be greatly appreciated!


On Mon, Aug 7, 2017 at 5:53 AM, Steve Loughran 
wrote:

>
> Hi Lars & Welcome!
>
> Maybe the first step here would be to look at those style guides and think
> about how to bring them up to date, especially with stuff like lambda
> expressions in Java 8, modules forthcoming in Java 9, SLF4J logging, JUnit
> 4 -> 5 testing, code instrumentation, diagnostics, log stability, etc.
>
> https://issues.apache.org/jira/browse/HADOOP-12143
>
> This is my go at doing this
>
> https://github.com/steveloughran/formality/blob/master/styleguide/styleguide.md
>
>
> I've not done any work on trying to get it in, more evolving it as how I
> code & what I look for, especially in tests.
>
> If you want to take this on, it'd be nice. At the same time, I fear
> there'd be push back if you turned up and started telling people what to
> do. Collaborating with us all on the test code is a good place to start.
>
> We're also more relaxed about contributions to the less-core bits of the
> system (things like HDFS, IPC, security and Yarn core are trouble). If
> there's stuff outside that you want to take a go at helping clean up,
> that'd be lower risk (example: object store connectors)
>
> -Steve
>
>
>
> On 7 Aug 2017, at 13:13, Lars Francke wrote:
>
> Hi,
>
> a few words about me: I've contributed to Hadoop (and its ecosystem[4]) in
> the past, am a Hive committer, and have used Hadoop for 10 years now, so I'm
> not totally inexperienced. I'm earning my money as a Hadoop consultant, so
> I've seen dozens of real-life clusters in my life.
>
> As part of a few recent client projects and now writing about Hadoop in a
> new project/book I'm digging into the source code to figure out some of the
> things that are not documented.
>
> But as part of this digging I'm seeing lots of warnings in the code,
> inconsistencies etc. and I'd like to contribute some fixes to this back to
> the community.
>
> I have been a long-time believer in good code quality and consistent code
> styles. This might affect people like me especially who do a lot of
> "drive-by" contributions as I'm not someone who looks at the code daily but
> comes across it reasonably often as part of client engagements. In those
> scenarios, it's very unhelpful to have inconsistent code & bad
> documentation.
>
> Two simple but concrete examples:
> * There's lots of "final" usages on variables and methods but no
> consistency. Was this done for particular reasons or personal preference?
>
> personal, though with the move to lambda expressions, it matters a lot more. We
> should really be marking all parameters as final at the very least.
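To make the point about final parameters and lambdas concrete, here is a small, self-contained sketch (the class and method names are made up for illustration; this is not Hadoop code). Java lambdas may only capture final or effectively final variables, so marking a parameter final both documents that it is never reassigned and guarantees it remains capturable:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FinalParamsDemo {
    // 'final' documents that 'prefix' is never reassigned in the method body,
    // which also guarantees the lambda below is allowed to capture it.
    static List<String> prefixAll(final String prefix, final List<String> names) {
        return names.stream()
                .map(n -> prefix + n)   // 'prefix' is captured by the lambda
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(prefixAll("fs.", Arrays.asList("s3a", "adl", "swift")));
        // prints [fs.s3a, fs.adl, fs.swift]
    }
}
```

Reassigning `prefix` anywhere in the method would be a compile error with `final`, and even without it would make the lambda capture illegal, so the keyword mostly serves as enforced documentation.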
>
>
> * Similarly, there's lots of things that are public or protected while they
> could in theory be private. This especially makes it very hard to reason
> about code.
>
> there's now a bit of fear of breaking things, but at the very least,
> things could be protected or package-private more than they are.
>
>
>
> Judging from the current code there's lots of "unofficial" code styling
> and/or personal preference. The Wiki says[1] to follow the Sun
> guidelines[2], which have not been updated in almost 20 years. A new version
> is in the works and clarifies a lot of things[3]. I'm trying to get it
> published soon. I'd try to format according to the latter (that means among
> other things no "final" for local variables).
>
> I realize that I won't be able to single-handedly fix all of this
> especially as code gets contributed but if the community thinks it's
> worthwhile I'd still love to land a few cleanup patches. My experience in
> the past has been that it's hard to get attention to these things (which I
> fully understand as they take up someone's time to review & commit).
>
> So, this is my request for comments on these questions:
> * Is there any interest in this at all?
> ** "This" being patches for code style & things like FindBugs & Checkstyle
> warnings
> * Size of the patches: Rather one big patch or smaller ones (e.g. per file
> or package)
> * Anyone willing to help me with this? e.g. reviewing and committing? I'd
> be more than happy to bribe you with drinks, sweets, food or something else
>
> My plan is not to go through each and every file and fix every issue I see.
> But there are some specific areas I'm looking at in detail and there I'd
> love to contribute back.
>
> Thank you for reading!
>
> Cheers,
> Lars
>
> PS: Posting to common-dev only, not sure if I should cross post to hdfs-dev
> and yarn-dev as well?
>
> [1] 
> [2] <http://www.oracle.com/technetwork/java/javase/documentation/codeconvtoc-136057.html>
> [3] 
> [4] <https://issues.apache.org/jira/issues/?filter=-1&jql=reporter%20%3D%20

[jira] [Created] (HADOOP-14737) Sort out hadoop-common contract-test-options.xml

2017-08-04 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14737:
---

 Summary: Sort out hadoop-common contract-test-options.xml
 Key: HADOOP-14737
 URL: https://issues.apache.org/jira/browse/HADOOP-14737
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation, fs, test
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


Follow up to HADOOP-14103. Update hadoop-common testing.md in a similar fashion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14721) Add StreamCapabilities support to Aliyun OSS

2017-08-01 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14721:
---

 Summary: Add StreamCapabilities support to Aliyun OSS
 Key: HADOOP-14721
 URL: https://issues.apache.org/jira/browse/HADOOP-14721
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/oss
Affects Versions: 3.0.0-alpha4
Reporter: John Zhuge









[jira] [Created] (HADOOP-14720) Add StreamCapabilities support to Swift

2017-08-01 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14720:
---

 Summary: Add StreamCapabilities support to Swift
 Key: HADOOP-14720
 URL: https://issues.apache.org/jira/browse/HADOOP-14720
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/swift
Affects Versions: 3.0.0-alpha4
Reporter: John Zhuge









[jira] [Created] (HADOOP-14719) Add StreamCapabilities support to WASB

2017-08-01 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14719:
---

 Summary: Add StreamCapabilities support to WASB
 Key: HADOOP-14719
 URL: https://issues.apache.org/jira/browse/HADOOP-14719
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.0.0-alpha4
Reporter: John Zhuge









[jira] [Created] (HADOOP-14718) Add StreamCapabilities support to ADLS

2017-08-01 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14718:
---

 Summary: Add StreamCapabilities support to ADLS
 Key: HADOOP-14718
 URL: https://issues.apache.org/jira/browse/HADOOP-14718
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/adl
Affects Versions: 3.0.0-alpha4
Reporter: John Zhuge









[jira] [Created] (HADOOP-14712) Document support for AWS Snowball

2017-08-01 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14712:
---

 Summary: Document support for AWS Snowball
 Key: HADOOP-14712
 URL: https://issues.apache.org/jira/browse/HADOOP-14712
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation, fs/s3
Affects Versions: 2.8.0
 Environment: Document Hadoop support for AWS Snowball:
* Commands and parameters
* Performance tuning
* Caveats
* Troubleshooting
Reporter: John Zhuge









[jira] [Created] (HADOOP-14711) Test data transfer between Hadoop and AWS Snowball

2017-08-01 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14711:
---

 Summary: Test data transfer between Hadoop and AWS Snowball
 Key: HADOOP-14711
 URL: https://issues.apache.org/jira/browse/HADOOP-14711
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 2.8.0
Reporter: John Zhuge


Test data transfer between Hadoop and AWS Snowball:
* fs -cp
* DistCp
* Scale tests






[jira] [Created] (HADOOP-14710) Uber-JIRA: Support AWS Snowball

2017-08-01 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14710:
---

 Summary: Uber-JIRA: Support AWS Snowball
 Key: HADOOP-14710
 URL: https://issues.apache.org/jira/browse/HADOOP-14710
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: John Zhuge


Support data transfer between Hadoop and [AWS Snowball|http://docs.aws.amazon.com/snowball/latest/ug/whatissnowball.html].








Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)

2017-07-31 Thread John Zhuge
Just confirmed that HADOOP-13707 does fix the NN servlet issue in
branch-2.7.

On Mon, Jul 31, 2017 at 11:38 AM, Konstantin Shvachko 
wrote:

> Hi John,
>
> Thank you for checking and voting.
> As far as I know test failures on 2.7.4 are intermittent. We have a jira
> for that
> https://issues.apache.org/jira/browse/HDFS-11985
> but decided it should not block the release.
> The "dr,who" thing is a configuration issue. This page may be helpful:
> http://hadoop.apache.org/docs/stable/hadoop-hdfs-httpfs/ServerSetup.html
>
> Thanks,
> --Konstantin
>
> On Sun, Jul 30, 2017 at 11:24 PM, John Zhuge  wrote:
>
>> Hi Konstantin,
>>
>> Thanks a lot for the effort to prepare the 2.7.4-RC0 release!
>>
>> +1 (non-binding)
>>
>>- Verified checksums and signatures of all tarballs
>>- Built source with native, Java 1.8.0_131-b11 on Mac OS X 10.12.6
>>- Verified cloud connectors:
>>   - All S3A integration tests
>>- Deployed both binary and built source to a pseudo cluster, passed
>>the following sanity tests in insecure, SSL, and SSL+Kerberos mode:
>>   - HDFS basic and ACL
>>   - DistCp basic
>>   - MapReduce wordcount (only failed in SSL+Kerberos mode for binary
>>   tarball, probably unrelated)
>>   - KMS and HttpFS basic
>>   - Balancer start/stop
>>
>> Should we worry about these test failures? Likely fixed by
>> https://issues.apache.org/jira/browse/HADOOP-13707.
>>
>>- Got “curl: (22) The requested URL returned error: 403 User dr.who
>>is unauthorized to access this page.” when accessing NameNode web servlet
>>/jmx, /conf, /logLevel, and /stacks. It passed in branch-2.8.
>>
>>
>> On Sat, Jul 29, 2017 at 4:29 PM, Konstantin Shvachko <
>> shv.had...@gmail.com> wrote:
>>
>>> Hi everybody,
>>>
> Here is the next release of the Apache Hadoop 2.7 line. The previous stable
> release 2.7.3 has been available since 25 August 2016.
>>> Release 2.7.4 includes 264 issues fixed after release 2.7.3, which are
>>> critical bug fixes and major optimizations. See more details in Release
>>> Note:
>>> http://home.apache.org/~shv/hadoop-2.7.4-RC0/releasenotes.html
>>>
>>> The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.4-RC0/
>>>
>>> Please give it a try and vote on this thread. The vote will run for 5
>>> days
>>> ending 08/04/2017.
>>>
>>> Please note that my up-to-date public key is available from:
>>> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>>> Please don't forget to refresh the page if you've been there recently.
>>> There are other places on Apache sites, which may contain my outdated key.
>>>
>>> Thanks,
>>> --Konstantin
>>>
>>
>>
>>
>> --
>> John
>>
>
>


-- 
John


Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)

2017-07-30 Thread John Zhuge
Hi Konstantin,

Thanks a lot for the effort to prepare the 2.7.4-RC0 release!

+1 (non-binding)

   - Verified checksums and signatures of all tarballs
   - Built source with native, Java 1.8.0_131-b11 on Mac OS X 10.12.6
   - Verified cloud connectors:
  - All S3A integration tests
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure, SSL, and SSL+Kerberos mode:
  - HDFS basic and ACL
  - DistCp basic
  - MapReduce wordcount (only failed in SSL+Kerberos mode for binary
  tarball, probably unrelated)
  - KMS and HttpFS basic
  - Balancer start/stop

Should we worry about these test failures? Likely fixed by
https://issues.apache.org/jira/browse/HADOOP-13707.

   - Got “curl: (22) The requested URL returned error: 403 User dr.who is
   unauthorized to access this page.” when accessing NameNode web servlet
   /jmx, /conf, /logLevel, and /stacks. It passed in branch-2.8.


On Sat, Jul 29, 2017 at 4:29 PM, Konstantin Shvachko 
wrote:

> Hi everybody,
>
> Here is the next release of the Apache Hadoop 2.7 line. The previous stable
> release 2.7.3 has been available since 25 August 2016.
> Release 2.7.4 includes 264 issues fixed after release 2.7.3, which are
> critical bug fixes and major optimizations. See more details in Release
> Note:
> http://home.apache.org/~shv/hadoop-2.7.4-RC0/releasenotes.html
>
> The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.4-RC0/
>
> Please give it a try and vote on this thread. The vote will run for 5 days
> ending 08/04/2017.
>
> Please note that my up-to-date public key is available from:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> Please don't forget to refresh the page if you've been there recently.
> There are other places on Apache sites, which may contain my outdated key.
>
> Thanks,
> --Konstantin
>



-- 
John


[jira] [Created] (HADOOP-14695) Allow disabling chunked encoding

2017-07-28 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14695:
---

 Summary: Allow disabling chunked encoding
 Key: HADOOP-14695
 URL: https://issues.apache.org/jira/browse/HADOOP-14695
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: John Zhuge


[Using the Amazon S3 Adapter for Snowball|http://docs.aws.amazon.com/snowball/latest/ug/using-adapter.html] 
indicates that we need to disable chunked encoding and enable path-style access.

HADOOP-12963 enables setting path style access.

This JIRA will enable disabling chunked encoding. A new property 
{{fs.s3a.disable.chunked.encoding}} is proposed.






Re: Should ToolRunner call UserGroupInformation.setConfiguration?

2017-07-26 Thread John Zhuge
The static UGI.conf is set to a new Configuration object in
UGI.ensureInitialized if setConfiguration has not already been called.


ToolRunner's conf does take -D overrides, but it is not copied to UGI.conf.
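A minimal, self-contained model of the lazy-initialization behavior described above (class and method names are invented for illustration; this is not the actual UGI code): the first read locks in the defaults unless setConfiguration(...) was called earlier, so overrides applied only to a different Configuration object never reach the static one.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model of UGI-style lazy static configuration: the first read
// initializes the static conf with defaults unless setConfiguration(...)
// ran first, so -D overrides parsed into ToolRunner's own conf are lost.
public class StaticConfDemo {
    private static Map<String, String> conf;

    static synchronized void setConfiguration(Map<String, String> c) {
        conf = c;
    }

    private static synchronized void ensureInitialized() {
        if (conf == null) {
            conf = new HashMap<>();   // defaults win from this point on
            conf.put("hadoop.kerberos.kinit.command", "kinit");
        }
    }

    static synchronized String get(String key) {
        ensureInitialized();
        return conf.get(key);
    }

    public static void main(String[] args) {
        // Reading before any setConfiguration() call locks in the default.
        System.out.println(StaticConfDemo.get("hadoop.kerberos.kinit.command"));
    }
}
```

Under this model, a tool that wants its parsed overrides respected would have to push its Configuration into the static holder before anything else triggers the lazy initialization.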

On Wed, Jul 26, 2017 at 8:42 PM, John Zhuge  wrote:

> Hi Gurus,
>
> Unlike YarnChild, DN, NN, and many others, ToolRunner does not
> call UserGroupInformation.setConfiguration. Is this by design?
>
> That means you cannot use "-D" to override any conf property used in UGI.
> The only direct call I found is:
>
> String cmd = conf.get("hadoop.kerberos.kinit.command", "kinit");
>
>
> Thanks,
> --
> John
>



-- 
John


Should ToolRunner call UserGroupInformation.setConfiguration?

2017-07-26 Thread John Zhuge
Hi Gurus,

Unlike YarnChild, DN, NN, and many others, ToolRunner does not
call UserGroupInformation.setConfiguration. Is this by design?

That means you cannot use "-D" to override any conf property used in UGI.
The only direct call I found is:

String cmd = conf.get("hadoop.kerberos.kinit.command", "kinit");


Thanks,
-- 
John


[jira] [Created] (HADOOP-14679) Obtain ADLS access token provider type from credential provider

2017-07-22 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14679:
---

 Summary: Obtain ADLS access token provider type from credential 
provider
 Key: HADOOP-14679
 URL: https://issues.apache.org/jira/browse/HADOOP-14679
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/adl
Affects Versions: 3.0.0-alpha2
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


Found it convenient to add {{fs.adl.oauth2.access.token.provider.type}} along 
with ADLS credentials to the credential store.






[jira] [Created] (HADOOP-14678) AdlFilesystem#initialize swallows exception when getting user name

2017-07-22 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14678:
---

 Summary: AdlFilesystem#initialize swallows exception when getting 
user name
 Key: HADOOP-14678
 URL: https://issues.apache.org/jira/browse/HADOOP-14678
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/adl
Affects Versions: 3.0.0-alpha2
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


https://github.com/apache/hadoop/blob/5c61ad24887f76dfc5a5935b2c5dceb6bfd99417/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java#L122

It should log the exception.






Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0

2017-07-05 Thread John Zhuge
+1 (non-binding)


   - Verified checksums and signatures of the tarballs
   - Built source with native, Java 1.8.0_131 on Mac OS X 10.12.5
   - Cloud connectors:
  - A few S3A integration tests
  - A few ADL live unit tests
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure, SSL, and SSL+Kerberos mode:
  - HDFS basic and ACL
  - DistCp basic
  - WordCount (skipped in Kerberos mode)
  - KMS and HttpFS basic

Thanks Andrew for the great effort!

On Wed, Jul 5, 2017 at 1:33 PM, Eric Payne 
wrote:

> Thanks Andrew.
> I downloaded the source, built it, and installed it onto a pseudo
> distributed 4-node cluster.
>
> I ran mapred and streaming test cases, including sleep and wordcount.
> +1 (non-binding)
> -Eric
>
>   From: Andrew Wang 
>  To: "common-dev@hadoop.apache.org" ; "
> hdfs-...@hadoop.apache.org" ; "
> mapreduce-...@hadoop.apache.org" ; "
> yarn-...@hadoop.apache.org" 
>  Sent: Thursday, June 29, 2017 9:41 PM
>  Subject: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
>
> Hi all,
>
> As always, thanks to the many, many contributors who helped with this
> release! I've prepared an RC0 for 3.0.0-alpha4:
>
> http://home.apache.org/~wang/3.0.0-alpha4-RC0/
>
> The standard 5-day vote would run until midnight on Tuesday, July 4th.
> Given that July 4th is a holiday in the US, I expect this vote might have
> to be extended, but I'd like to close the vote relatively soon after.
>
> I've done my traditional testing of a pseudo-distributed cluster with a
> single task pi job, which was successful.
>
> Normally my testing would end there, but I'm slightly more confident this
> time. At Cloudera, we've successfully packaged and deployed a snapshot from
> a few days ago, and run basic smoke tests. Some bugs found from this
> include HDFS-11956, which fixes backwards compat with Hadoop 2 clients, and
> the revert of HDFS-11696, which broke NN QJM HA setup.
>
> Vijay is working on a test run with a fuller test suite (the results of
> which we can hopefully post soon).
>
> My +1 to start,
>
> Best,
> Andrew
>
>
>
>



-- 
John


Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0

2017-07-05 Thread John Zhuge
False alarm, fixed the build issue with "mvn -U clean install".

On Wed, Jul 5, 2017 at 6:08 PM, John Zhuge  wrote:

> For some reason, I can't build the source.
>
> Got this when running "mvn install -U" inside directory
> "hadoop-maven-plugins":
>
> [ERROR] Failed to execute goal org.apache.maven.plugins:
> maven-remote-resources-plugin:1.5:process (default) on project
> hadoop-maven-plugins: Execution default of goal org.apache.maven.plugins:
> maven-remote-resources-plugin:1.5:process failed: Plugin
> org.apache.maven.plugins:maven-remote-resources-plugin:1.5 or one of its
> dependencies could not be resolved: Failed to collect dependencies at
> org.apache.maven.plugins:maven-remote-resources-plugin:jar:1.5 ->
> org.apache.hadoop:hadoop-build-tools:jar:3.0.0-alpha4: Failed to read
> artifact descriptor for org.apache.hadoop:hadoop-build-tools:jar:3.0.0-alpha4:
> Failure to find org.apache.hadoop:hadoop-main:pom:3.0.0-alpha4 in
> https://repo.maven.apache.org/maven2 was cached in the local repository,
> resolution will not be reattempted until the update interval of central has
> elapsed or updates are forced -> [Help 1]
>
>
> On Thu, Jun 29, 2017 at 7:40 PM, Andrew Wang 
> wrote:
>
>> Hi all,
>>
>> As always, thanks to the many, many contributors who helped with this
>> release! I've prepared an RC0 for 3.0.0-alpha4:
>>
>> http://home.apache.org/~wang/3.0.0-alpha4-RC0/
>>
>> The standard 5-day vote would run until midnight on Tuesday, July 4th.
>> Given that July 4th is a holiday in the US, I expect this vote might have
>> to be extended, but I'd like to close the vote relatively soon after.
>>
>> I've done my traditional testing of a pseudo-distributed cluster with a
>> single task pi job, which was successful.
>>
>> Normally my testing would end there, but I'm slightly more confident this
>> time. At Cloudera, we've successfully packaged and deployed a snapshot
>> from
>> a few days ago, and run basic smoke tests. Some bugs found from this
>> include HDFS-11956, which fixes backwards compat with Hadoop 2 clients,
>> and
>> the revert of HDFS-11696, which broke NN QJM HA setup.
>>
>> Vijay is working on a test run with a fuller test suite (the results of
>> which we can hopefully post soon).
>>
>> My +1 to start,
>>
>> Best,
>> Andrew
>>
>
>
>
> --
> John
>



-- 
John


Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0

2017-07-05 Thread John Zhuge
For some reason, I can't build the source.

Got this when running "mvn install -U" inside directory
"hadoop-maven-plugins":

[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process
(default) on project hadoop-maven-plugins: Execution default of goal
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process failed:
Plugin org.apache.maven.plugins:maven-remote-resources-plugin:1.5 or one of
its dependencies could not be resolved: Failed to collect dependencies at
org.apache.maven.plugins:maven-remote-resources-plugin:jar:1.5 ->
org.apache.hadoop:hadoop-build-tools:jar:3.0.0-alpha4: Failed to read
artifact descriptor for
org.apache.hadoop:hadoop-build-tools:jar:3.0.0-alpha4: Failure to find
org.apache.hadoop:hadoop-main:pom:3.0.0-alpha4 in
https://repo.maven.apache.org/maven2 was cached in the local repository,
resolution will not be reattempted until the update interval of central has
elapsed or updates are forced -> [Help 1]


On Thu, Jun 29, 2017 at 7:40 PM, Andrew Wang 
wrote:

> Hi all,
>
> As always, thanks to the many, many contributors who helped with this
> release! I've prepared an RC0 for 3.0.0-alpha4:
>
> http://home.apache.org/~wang/3.0.0-alpha4-RC0/
>
> The standard 5-day vote would run until midnight on Tuesday, July 4th.
> Given that July 4th is a holiday in the US, I expect this vote might have
> to be extended, but I'd like to close the vote relatively soon after.
>
> I've done my traditional testing of a pseudo-distributed cluster with a
> single task pi job, which was successful.
>
> Normally my testing would end there, but I'm slightly more confident this
> time. At Cloudera, we've successfully packaged and deployed a snapshot from
> a few days ago, and run basic smoke tests. Some bugs found from this
> include HDFS-11956, which fixes backwards compat with Hadoop 2 clients, and
> the revert of HDFS-11696, which broke NN QJM HA setup.
>
> Vijay is working on a test run with a fuller test suite (the results of
> which we can hopefully post soon).
>
> My +1 to start,
>
> Best,
> Andrew
>



-- 
John


[jira] [Created] (HADOOP-14608) KMS JMX servlet path not backwards compatible

2017-06-28 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14608:
---

 Summary: KMS JMX servlet path not backwards compatible
 Key: HADOOP-14608
 URL: https://issues.apache.org/jira/browse/HADOOP-14608
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 3.0.0-alpha2
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


HADOOP-13597 switched KMS from Tomcat to Jetty. The implementation changed the JMX 
path from /kms/jmx to /jmx, which is in line with other HttpServer2-based 
servlets.

If there is a desire for the same JMX path, please vote here.






[jira] [Created] (HADOOP-14519) Client$Connection#waitForWork may suffer spurious wakeup

2017-06-09 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14519:
---

 Summary: Client$Connection#waitForWork may suffer spurious wakeup
 Key: HADOOP-14519
 URL: https://issues.apache.org/jira/browse/HADOOP-14519
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Critical


{{Client$Connection#waitForWork}} may suffer a spurious wakeup because the 
{{wait}} call is not wrapped in a condition-checking loop. See 
[https://docs.oracle.com/javase/7/docs/api/java/lang/Object.html#wait()].

{code:title=Client$Connection#waitForWork}
  if (calls.isEmpty() && !shouldCloseConnection.get() && running.get())  {
long timeout = maxIdleTime-
  (Time.now()-lastActivity.get());
if (timeout>0) {
  try {
wait(timeout);  <<<<<< spurious wakeup
  } catch (InterruptedException e) {}
}
  }
{code}
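A general sketch of the standard fix, re-checking the predicate in a loop around wait(), using made-up names rather than the actual Client internals (this is an illustration of the pattern, not the Hadoop patch):

```java
// Hedged sketch: the predicate is re-checked in a while loop, so a
// spurious return from wait() simply goes back to waiting.
public class WaitLoopDemo {
    private final Object lock = new Object();
    private boolean done = false;

    public void finish() {
        synchronized (lock) {
            done = true;
            lock.notifyAll();   // wake every waiter; each re-checks 'done'
        }
    }

    /** Returns true once finish() has run, false if the timeout elapses. */
    public boolean awaitDone(long timeoutMs) throws InterruptedException {
        final long deadline = System.currentTimeMillis() + timeoutMs;
        synchronized (lock) {
            while (!done) {                       // loop, not a bare 'if'
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) {
                    return false;                 // a real timeout
                }
                lock.wait(remaining);             // may wake spuriously
            }
            return true;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        WaitLoopDemo demo = new WaitLoopDemo();
        new Thread(demo::finish).start();
        System.out.println(demo.awaitDone(5000));   // prints true
    }
}
```

With a bare `if`, a spurious wakeup would fall through as if the condition had been signalled; the loop makes the wait correct regardless of why wait() returned.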






[jira] [Created] (HADOOP-14471) Upgrade Jetty to latest version

2017-05-31 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14471:
---

 Summary: Upgrade Jetty to latest version
 Key: HADOOP-14471
 URL: https://issues.apache.org/jira/browse/HADOOP-14471
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0-alpha4
Reporter: John Zhuge
Assignee: John Zhuge


The current Jetty version is {{9.3.11.v20160721}}. Should we upgrade it to the 
latest 9.3.x, which is {{9.3.19.v20170502}}? Or to 9.4?

9.3.x changes: 
https://github.com/eclipse/jetty.project/blob/jetty-9.3.x/VERSION.txt

9.4.x changes:
https://github.com/eclipse/jetty.project/blob/jetty-9.4.x/VERSION.txt






[jira] [Created] (HADOOP-14464) hadoop-aws doc header warning #5 line wrapped

2017-05-27 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14464:
---

 Summary: hadoop-aws doc header warning #5 line wrapped
 Key: HADOOP-14464
 URL: https://issues.apache.org/jira/browse/HADOOP-14464
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Trivial


The line was probably automatically wrapped by the editor:
{code}
Warning #5: The S3 client provided by Amazon EMR are not from the Apache
Software foundation, and are only supported by Amazon.
{code}






[jira] [Resolved] (HADOOP-14421) TestAdlFileSystemContractLive#testListStatus assertion failed

2017-05-17 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HADOOP-14421.
-
   Resolution: Duplicate
Fix Version/s: 2.8.1

Sorry for the false alarm. This issue is fixed by HADOOP-14230.

> TestAdlFileSystemContractLive#testListStatus assertion failed
> -
>
> Key: HADOOP-14421
> URL: https://issues.apache.org/jira/browse/HADOOP-14421
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>    Reporter: John Zhuge
>Assignee: Atul Sikaria
> Fix For: 2.8.1
>
>
> TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testListStatus:273
>  expected:<1> but was:<11>
> {noformat}
> Tests run: 32, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 35.118 sec 
> <<< FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive
> testListStatus(org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive)  
> Time elapsed: 0.518 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: expected:<1> but was:<11>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:234)
>   at junit.framework.Assert.assertEquals(Assert.java:241)
>   at junit.framework.TestCase.assertEquals(TestCase.java:409)
>   at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.testListStatus(FileSystemContractBaseTest.java:273)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at 
> org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive.runTest(TestAdlFileSystemContractLive.java:60)
> {noformat}
> This is the first time we saw the issue. The test store {{rwj2dm}} was 
> created on the fly and destroyed after the test.
> The code base does not have HADOOP-14230 which cleans up the test dir better. 
> Trying to determine whether this might help.  






[jira] [Reopened] (HADOOP-14435) TestAdlFileSystemContractLive#testMkdirsWithUmask assertion failed

2017-05-17 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reopened HADOOP-14435:
-

Great idea!

Re-opened this as a doc JIRA to add a new {{Troubleshooting}} section to 
{{index.md}} documenting what I encountered here.

> TestAdlFileSystemContractLive#testMkdirsWithUmask assertion failed
> --
>
> Key: HADOOP-14435
> URL: https://issues.apache.org/jira/browse/HADOOP-14435
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs/adl
>Affects Versions: 2.9.0, 3.0.0-alpha3
>    Reporter: John Zhuge
>Assignee: John Zhuge
>
> Saw the following assertion failure in branch-2 and trunk:
> {noformat}
> Tests run: 43, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 80.189 sec 
> <<< FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive
> testMkdirsWithUmask(org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive)
>   Time elapsed: 0.71 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: expected:<461> but was:<456>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:219)
>   at junit.framework.Assert.assertEquals(Assert.java:226)
>   at junit.framework.TestCase.assertEquals(TestCase.java:392)
>   at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.testMkdirsWithUmask(FileSystemContractBaseTest.java:242)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at 
> org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive.runTest(TestAdlFileSystemContractLive.java:59)
> Results :
> Failed tests:
>   
> TestAdlFileSystemContractLive.runTest:59->FileSystemContractBaseTest.testMkdirsWithUmask:242
>  expected:<461> but was:<456>
> {noformat}






[jira] [Resolved] (HADOOP-14435) TestAdlFileSystemContractLive#testMkdirsWithUmask assertion failed

2017-05-17 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HADOOP-14435.
-
  Resolution: Not A Bug
Release Note: The "Other" entry in the default permissions of the ADL store 
can impact the file system contract test expecting certain permissions.
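The decimal values in the JUnit message are permission shorts printed in decimal; converting them to octal makes the mismatch visible (461 is 0715, 456 is 0710, i.e. the "other" read/execute bits are missing, matching the release note). A small standalone sketch, not Hadoop code, to decode them:

```java
// Sketch (not Hadoop code): decode the decimal permission shorts from the
// JUnit message "expected:<461> but was:<456>" into octal and symbolic form.
public class PermissionDecode {
    static String toOctal(int permission) {
        return Integer.toOctalString(permission);
    }
    static String toSymbolic(int permission) {
        StringBuilder sb = new StringBuilder();
        String rwx = "rwx";
        for (int shift = 8; shift >= 0; shift--) {
            boolean set = ((permission >> shift) & 1) == 1;
            sb.append(set ? rwx.charAt((8 - shift) % 3) : '-');
        }
        return sb.toString();
    }
    public static void main(String[] args) {
        // 461 decimal == 0715 == rwx--xr-x (what the contract test expects)
        System.out.println(toOctal(461) + " " + toSymbolic(461));
        // 456 decimal == 0710 == rwx--x--- ("other" bits dropped by the store)
        System.out.println(toOctal(456) + " " + toSymbolic(456));
    }
}
```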

> TestAdlFileSystemContractLive#testMkdirsWithUmask assertion failed
> --
>
> Key: HADOOP-14435
> URL: https://issues.apache.org/jira/browse/HADOOP-14435
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Saw the following assertion failure in branch-2 and trunk:
> {noformat}
> Tests run: 43, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 80.189 sec 
> <<< FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive
> testMkdirsWithUmask(org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive)
>   Time elapsed: 0.71 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: expected:<461> but was:<456>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:219)
>   at junit.framework.Assert.assertEquals(Assert.java:226)
>   at junit.framework.TestCase.assertEquals(TestCase.java:392)
>   at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.testMkdirsWithUmask(FileSystemContractBaseTest.java:242)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at 
> org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive.runTest(TestAdlFileSystemContractLive.java:59)
> Results :
> Failed tests:
>   
> TestAdlFileSystemContractLive.runTest:59->FileSystemContractBaseTest.testMkdirsWithUmask:242
>  expected:<461> but was:<456>
> {noformat}






[jira] [Created] (HADOOP-14435) TestAdlFileSystemContractLive#testMkdirsWithUmask assertion failed

2017-05-17 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14435:
---

 Summary: TestAdlFileSystemContractLive#testMkdirsWithUmask 
assertion failed
 Key: HADOOP-14435
 URL: https://issues.apache.org/jira/browse/HADOOP-14435
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/adl
Affects Versions: 2.9.0, 3.0.0-alpha3
Reporter: John Zhuge
Assignee: John Zhuge


{noformat}
Tests run: 43, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 80.189 sec <<< 
FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive
testMkdirsWithUmask(org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive)
  Time elapsed: 0.71 sec  <<< FAILURE!
junit.framework.AssertionFailedError: expected:<461> but was:<456>
at junit.framework.Assert.fail(Assert.java:57)
at junit.framework.Assert.failNotEquals(Assert.java:329)
at junit.framework.Assert.assertEquals(Assert.java:78)
at junit.framework.Assert.assertEquals(Assert.java:219)
at junit.framework.Assert.assertEquals(Assert.java:226)
at junit.framework.TestCase.assertEquals(TestCase.java:392)
at 
org.apache.hadoop.fs.FileSystemContractBaseTest.testMkdirsWithUmask(FileSystemContractBaseTest.java:242)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at junit.framework.TestCase.runTest(TestCase.java:176)
at 
org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive.runTest(TestAdlFileSystemContractLive.java:59)


Results :

Failed tests:
  
TestAdlFileSystemContractLive.runTest:59->FileSystemContractBaseTest.testMkdirsWithUmask:242
 expected:<461> but was:<456>
{noformat}






[jira] [Created] (HADOOP-14421) TestAdlFileSystemContractLive#testListStatus assertion failed

2017-05-14 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14421:
---

 Summary: TestAdlFileSystemContractLive#testListStatus assertion 
failed
 Key: HADOOP-14421
 URL: https://issues.apache.org/jira/browse/HADOOP-14421
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/adl
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: Atul Sikaria


TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testListStatus:273
 expected:<1> but was:<11>
{noformat}
Tests run: 32, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 35.118 sec <<< 
FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive
testListStatus(org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive)  
Time elapsed: 0.518 sec  <<< FAILURE!
junit.framework.AssertionFailedError: expected:<1> but was:<11>
at junit.framework.Assert.fail(Assert.java:57)
at junit.framework.Assert.failNotEquals(Assert.java:329)
at junit.framework.Assert.assertEquals(Assert.java:78)
at junit.framework.Assert.assertEquals(Assert.java:234)
at junit.framework.Assert.assertEquals(Assert.java:241)
at junit.framework.TestCase.assertEquals(TestCase.java:409)
at 
org.apache.hadoop.fs.FileSystemContractBaseTest.testListStatus(FileSystemContractBaseTest.java:273)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at junit.framework.TestCase.runTest(TestCase.java:176)
at 
org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive.runTest(TestAdlFileSystemContractLive.java:60)
{noformat}

This is the first time we saw the issue. The test store {{rwj2dm}} was created 
on the fly and destroyed after the test.

The code base does not have HADOOP-14230, which cleans up the test dir better. 
Trying to determine whether that change would help.







[jira] [Created] (HADOOP-14417) Update cipher list for KMS

2017-05-12 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14417:
---

 Summary: Update cipher list for KMS
 Key: HADOOP-14417
 URL: https://issues.apache.org/jira/browse/HADOOP-14417
 Project: Hadoop Common
  Issue Type: Improvement
  Components: kms, security
Affects Versions: 2.9.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


In Oracle Linux 6.8 configurations, the curl command cannot connect to certain 
CDH services that run on Apache Tomcat when the cluster has been configured for 
TLS/SSL. Specifically, HttpFS, KMS, Oozie, and Solr services reject connection 
attempts because the default cipher configuration uses weak temporary server 
keys (based on Diffie-Hellman key exchange protocol).

https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_os_ki.html#tls_weak_ciphers_rejected_by_oracle_linux_6






[jira] [Created] (HADOOP-14352) Make some HttpServer2 SSL properties optional

2017-04-25 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14352:
---

 Summary: Make some HttpServer2 SSL properties optional
 Key: HADOOP-14352
 URL: https://issues.apache.org/jira/browse/HADOOP-14352
 Project: Hadoop Common
  Issue Type: Improvement
  Components: kms
Affects Versions: 3.0.0-alpha2
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


{{HttpServer2#loadSSLConfiguration}} loads five SSL properties, but only the 
keystore location and password are required; the others (keystore key password, 
truststore location, and truststore password) can be optional.

According to 
http://www.eclipse.org/jetty/documentation/current/configuring-ssl.html:
* If there is no keymanagerpassword, then the keystorepassword is used instead.
* Trust store is typically set to the same path as the keystore.
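A minimal sketch of the fallback behaviour described above, using the standard ssl-server.xml key names for illustration; this is not the HttpServer2 implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: resolve optional SSL properties with the fallbacks described above.
public class SslConfSketch {
    // Return the value of key, falling back to fallbackKey when key is unset.
    static String resolve(Map<String, String> conf, String key, String fallbackKey) {
        String v = conf.get(key);
        return v != null ? v : conf.get(fallbackKey);
    }
    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // Only the two required properties are set.
        conf.put("ssl.server.keystore.location", "/etc/ssl/server.jks");
        conf.put("ssl.server.keystore.password", "storepass");
        // Optional key password falls back to the keystore password.
        System.out.println(resolve(conf, "ssl.server.keystore.keypassword",
                "ssl.server.keystore.password"));
        // Optional truststore location falls back to the keystore location.
        System.out.println(resolve(conf, "ssl.server.truststore.location",
                "ssl.server.keystore.location"));
    }
}
```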






[jira] [Created] (HADOOP-14347) Make KMS Jetty connection backlog configurable

2017-04-22 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14347:
---

 Summary: Make KMS Jetty connection backlog configurable
 Key: HADOOP-14347
 URL: https://issues.apache.org/jira/browse/HADOOP-14347
 Project: Hadoop Common
  Issue Type: Improvement
  Components: kms
Affects Versions: 3.0.0-alpha2
Reporter: John Zhuge
Assignee: John Zhuge


HADOOP-14003 enabled the customization of the Tomcat attributes {{protocol}}, 
{{acceptCount}}, and {{acceptorThreadCount}} for KMS in branch-2. See 
https://tomcat.apache.org/tomcat-6.0-doc/config/http.html.

KMS switched from Tomcat to Jetty in trunk. Only {{acceptCount}} has a 
counterpart in Jetty, {{acceptQueueSize}}. See 
http://www.eclipse.org/jetty/documentation/9.3.x/configuring-connectors.html.
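The attribute correspondence described above can be summarised in a small illustrative sketch (names taken from the two linked docs; this is not Hadoop code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: Tomcat connector attributes from HADOOP-14003 and their closest
// Jetty counterparts; null marks attributes with no direct Jetty equivalent.
public class ConnectorAttrs {
    static Map<String, String> tomcatToJetty() {
        Map<String, String> m = new LinkedHashMap<>();
        m.put("acceptCount", "acceptQueueSize");
        m.put("protocol", null);
        m.put("acceptorThreadCount", null);
        return m;
    }
    public static void main(String[] args) {
        tomcatToJetty().forEach((tomcat, jetty) -> System.out.println(
                tomcat + " -> " + (jetty == null ? "(no direct Jetty equivalent)" : jetty)));
    }
}
```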






[jira] [Resolved] (HADOOP-14344) Revert HADOOP-13606 swift FS to add a service load metadata file

2017-04-21 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HADOOP-14344.
-
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   2.8.1
   2.9.0

The revert patch is in HADOOP-13606: 
https://issues.apache.org/jira/secure/attachment/12856766/HADOOP-13606.002.patch

> Revert HADOOP-13606 swift FS to add a service load metadata file
> 
>
> Key: HADOOP-14344
> URL: https://issues.apache.org/jira/browse/HADOOP-14344
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 2.8.0
>    Reporter: John Zhuge
>    Assignee: John Zhuge
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
>
> Create the revert JIRA for release notes.






[jira] [Created] (HADOOP-14344) Revert HADOOP-13606 swift FS to add a service load metadata file

2017-04-21 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14344:
---

 Summary: Revert HADOOP-13606 swift FS to add a service load 
metadata file
 Key: HADOOP-14344
 URL: https://issues.apache.org/jira/browse/HADOOP-14344
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: John Zhuge


As titled






[jira] [Created] (HADOOP-14341) Support multi-line value for ssl.server.exclude.cipher.list

2017-04-21 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14341:
---

 Summary: Support multi-line value for 
ssl.server.exclude.cipher.list
 Key: HADOOP-14341
 URL: https://issues.apache.org/jira/browse/HADOOP-14341
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.4
Reporter: John Zhuge
Assignee: John Zhuge


The multi-line value for {{ssl.server.exclude.cipher.list}} shown in 
{{ssl-server.xml.example}} does not work. The property value
{code}

<property>
  <name>ssl.server.exclude.cipher.list</name>
  <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
  SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,
  SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,
  SSL_RSA_WITH_RC4_128_MD5</value>
  <description>Optional. The weak security cipher suites that you want excluded
  from SSL communication.</description>
</property>

{code}
is actually parsed into:
* "TLS_ECDHE_RSA_WITH_RC4_128_SHA"
* "SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA"
* "\nSSL_RSA_WITH_DES_CBC_SHA"
* "SSL_DHE_RSA_WITH_DES_CBC_SHA"
* "\nSSL_RSA_EXPORT_WITH_RC4_40_MD5"
* "SSL_RSA_EXPORT_WITH_DES40_CBC_SHA"
* "\nSSL_RSA_WITH_RC4_128_MD5"
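The behaviour can be reproduced with a short standalone sketch (illustrative; this is not Hadoop's configuration parser). Splitting the multi-line value on commas alone keeps the embedded newlines; trimming each token is one possible fix:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class CipherListParse {
    // Naive parse: split on commas only, keeping any embedded whitespace.
    static List<String> naiveParse(String value) {
        return Arrays.asList(value.split(","));
    }
    // Trim each token so cipher names come out clean.
    static List<String> trimmedParse(String value) {
        return naiveParse(value).stream().map(String::trim).collect(Collectors.toList());
    }
    public static void main(String[] args) {
        String value = "TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,\n"
                + "SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA";
        // The third token still carries the newline from the property value.
        System.out.println("[" + naiveParse(value).get(2) + "]");
        System.out.println("[" + trimmedParse(value).get(2) + "]");
    }
}
```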






[jira] [Created] (HADOOP-14340) Enable KMS and HttpFS to exclude weak SSL ciphers

2017-04-20 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14340:
---

 Summary: Enable KMS and HttpFS to exclude weak SSL ciphers
 Key: HADOOP-14340
 URL: https://issues.apache.org/jira/browse/HADOOP-14340
 Project: Hadoop Common
  Issue Type: Improvement
  Components: kms
Affects Versions: 3.0.0-alpha2
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


HADOOP-12668 added {{HttpServer2$Builder#excludeCiphers}} to exclude SSL 
ciphers. Enable KMS and HttpFS to use this feature by modifying 
{{HttpServer2$Builder#loadSSLConfiguration}}, called by both.






[jira] [Reopened] (HADOOP-14241) Add ADLS sensitive config keys to default list

2017-04-19 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reopened HADOOP-14241:
-

Reopen to run pre-commit

> Add ADLS sensitive config keys to default list
> --
>
> Key: HADOOP-14241
> URL: https://issues.apache.org/jira/browse/HADOOP-14241
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/adl, security
>Affects Versions: 2.8.0
>    Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14241.001.patch, HADOOP-14241.002.patch, 
> HADOOP-14241.branch-2.002.patch
>
>
> ADLS sensitive credential config keys should be added to the default list for 
> {{hadoop.security.sensitive-config-keys}}.






[jira] [Created] (HADOOP-14317) KMSWebServer$deprecateEnv may leak secret

2017-04-17 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14317:
---

 Summary: KMSWebServer$deprecateEnv may leak secret
 Key: HADOOP-14317
 URL: https://issues.apache.org/jira/browse/HADOOP-14317
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms, security
Affects Versions: 3.0.0-alpha2
Reporter: John Zhuge
Assignee: John Zhuge


It may print the secret in the warning message:
{code}
LOG.warn("Environment variable {} = '{}' is deprecated and overriding"
+ " property {} = '{}', please set the property in {} instead.",
varName, value, propName, propValue, confFile);
{code}
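One possible mitigation, sketched below under the assumption that only the names (not the secret values) need to appear in the log; this is an illustration, not the committed fix, and the example names are hypothetical:

```java
// Sketch: build the deprecation warning without echoing secret values.
public class DeprecateEnvSketch {
    // Hypothetical redaction helper: never echo a possibly secret value.
    static String redact(String value) {
        return value == null ? "<unset>" : "<redacted>";
    }
    // Same shape as the LOG.warn above, but with both values masked.
    static String warnMessage(String varName, String value,
                              String propName, String propValue, String confFile) {
        return String.format(
                "Environment variable %s = '%s' is deprecated and overriding"
                + " property %s = '%s', please set the property in %s instead.",
                varName, redact(value), propName, redact(propValue), confFile);
    }
    public static void main(String[] args) {
        // Illustrative variable, property, and file names.
        System.out.println(warnMessage("KMS_SSL_KEYSTORE_PASS", "hunter2",
                "example.ssl.keystore.pass", "hunter2", "kms-site.xml"));
    }
}
```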






[jira] [Resolved] (HADOOP-14151) Swift treats 0-len file as directory

2017-04-17 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HADOOP-14151.
-
Resolution: Duplicate

> Swift treats 0-len file as directory
> 
>
> Key: HADOOP-14151
> URL: https://issues.apache.org/jira/browse/HADOOP-14151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Affects Versions: 3.0.0-alpha3
>    Reporter: John Zhuge
>
> Unit test {{TestSwiftContractRootDir#testRmNonEmptyRootDirNonRecursive}} 
> fails at {{assertIsFile(file)}}. This leads me to suspect swift treats 0-len 
> file as directory. Confirmed by the following experiment:
> {noformat}
> $ ls -l /tmp/zero /tmp/abc
> -rw-rw-r--  1 jzhuge  wheel  4 Mar  7 13:19 /tmp/abc
> -rw-rw-r--  1 jzhuge  wheel  0 Mar  7 13:19 /tmp/zero
> $ bin/hadoop fs -put /tmp/zero /tmp/abc swift://jzswift.rackspace/
> 2017-03-07 13:24:09,321 INFO snative.SwiftNativeFileSystemStore: mv  
> jzswift/zero._COPYING_ swift://jzswift.rackspace/zero
> $ bin/hadoop fs -touchz swift://jzswift.rackspace/touchz
> $ bin/hadoop fs -ls swift://jzswift.rackspace/
> Found 3 items
> -rw-rw-rw-   1  4 2017-03-07 13:36 swift://jzswift.rackspace/abc
> drwxrwxrwx   -  0 2017-03-07 13:28 swift://jzswift.rackspace/touchz
> drwxrwxrwx   -  0 2017-03-07 13:32 swift://jzswift.rackspace/zero
> {noformat}






Re: Local trunk build is flaky

2017-04-17 Thread John Zhuge
This command works for me (similar to Eric's):
mvn clean && mvn install package -Pdist -Dtar -DskipTests -DskipShade
-Dmaven.javadoc.skip


$ mvn -v
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
2015-11-10T08:41:47-08:00)
Maven home: /Users/jzhuge/apache-maven-3.3.9
Java version: 1.8.0_66, vendor: Oracle Corporation
Java home:
/Library/Java/JavaVirtualMachines/jdk1.8.0_66.jdk/Contents/Home/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.12.4", arch: "x86_64", family: "mac"


On Mon, Apr 17, 2017 at 10:46 AM, Zhe Zhang  wrote:

> Thanks Eric. Same build command failed for me.
>
> On Mon, Apr 17, 2017 at 10:38 AM Eric Badger 
> wrote:
>
> > For what it's worth, I successfully built trunk just now on macOS Sierra
> > using mvn install -Pdist -Dtar -DskipTests -DskipShade -Dmaven
> > .javadoc.skip
> >
> >
> >
> > On Monday, April 17, 2017 12:32 PM, Zhe Zhang  wrote:
> >
> >
> > Starting from last week, building trunk on my local Mac has been flaky. I
> > haven't tried Linux yet. The error is:
> >
> > [ERROR] Failed to execute goal
> > org.apache.maven.plugins:maven-enforcer-plugin:1.4.1:enforce (clean) on
> > project hadoop-assemblies: Some Enforcer rules have failed. Look above
> for
> > specific messages explaining why the rule failed. -> [Help 1]
> > org.apache.maven.lifecycle.LifecycleExecutionException: Failed to
> execute
> > goal org.apache.maven.plugins:maven-enforcer-plugin:1.4.1:enforce
> (clean)
> > on project hadoop-assemblies: Some Enforcer rules have failed. Look above
> > for specific messages explaining why the rule failed.
> > at
> >
> > org.apache.maven.lifecycle.internal.MojoExecutor.execute(
> MojoExecutor.java:217)
> > ...
> >
> > Anyone else seeing the same issue?
> >
> > Thanks,
> >
> > --
> > Zhe Zhang
> > Apache Hadoop Committer
> > http://zhe-thoughts.github.io/about/ | @oldcap
> >
> >
> > --
> Zhe Zhang
> Apache Hadoop Committer
> http://zhe-thoughts.github.io/about/ | @oldcap
>



-- 
John


[jira] [Resolved] (HADOOP-14292) Transient TestAdlContractRootDirLive failure

2017-04-13 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HADOOP-14292.
-
Resolution: Not A Problem
  Assignee: Atul Sikaria  (was: Vishwajeet Dusane)

Thanks [~snehav]!  {{bobdir}} probably didn't have the permission for this test 
case to pass. This test case expects a clean account.

Filed HADOOP-14304 so that the path will not be swallowed when a remote 
exception occurs.

> Transient TestAdlContractRootDirLive failure
> 
>
> Key: HADOOP-14292
> URL: https://issues.apache.org/jira/browse/HADOOP-14292
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: Atul Sikaria
>
> Got the test failure once, but could not reproduce it the second time. Maybe 
> a transient ADLS error?
> {noformat}
> Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 13.641 sec 
> <<< FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlContractRootDirLive
> testRecursiveRootListing(org.apache.hadoop.fs.adl.live.TestAdlContractRootDirLive)
>   Time elapsed: 3.841 sec  <<< ERROR!
> org.apache.hadoop.security.AccessControlException: LISTSTATUS failed with 
> error 0x83090aa2 (Forbidden. ACL verification failed. Either the resource 
> does not exist or the user is not authorized to perform the requested 
> operation.). 
> [db432517-4060-4d96-9aad-7309f8469489][2017-04-07T10:24:54.1708810-07:00]
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at 
> com.microsoft.azure.datalake.store.ADLStoreClient.getRemoteException(ADLStoreClient.java:1144)
>   at 
> com.microsoft.azure.datalake.store.ADLStoreClient.getExceptionFromResponse(ADLStoreClient.java:1106)
>   at 
> com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectoryInternal(ADLStoreClient.java:527)
>   at 
> com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectory(ADLStoreClient.java:504)
>   at 
> com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectory(ADLStoreClient.java:368)
>   at 
> org.apache.hadoop.fs.adl.AdlFileSystem.listStatus(AdlFileSystem.java:473)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1824)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1866)
>   at org.apache.hadoop.fs.FileSystem$4.<init>(FileSystem.java:2028)
>   at 
> org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2027)
>   at 
> org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2010)
>   at 
> org.apache.hadoop.fs.FileSystem$5.handleFileStat(FileSystem.java:2168)
>   at org.apache.hadoop.fs.FileSystem$5.hasNext(FileSystem.java:2145)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils$TreeScanResults.<init>(ContractTestUtils.java:1252)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRecursiveRootListing(AbstractContractRootDirectoryTest.java:219)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}






[jira] [Created] (HADOOP-14304) AdlStoreClient#getExceptionFromResponse should not swallow defaultMessage

2017-04-13 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14304:
---

 Summary: AdlStoreClient#getExceptionFromResponse should not 
swallow defaultMessage
 Key: HADOOP-14304
 URL: https://issues.apache.org/jira/browse/HADOOP-14304
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/adl
Affects Versions: 2.8.0
Reporter: John Zhuge


Discovered the issue in HADOOP-14292.

In {{AdlStoreClient}}, {{enumerateDirectoryInternal}} called 
{{getExceptionFromResponse}} with {{defaultMessage}} set to {{"Error 
enumerating directory " + path}}. This useful message was swallowed at 
https://github.com/Azure/azure-data-lake-store-java/blob/2.1.4/src/main/java/com/microsoft/azure/datalake/store/ADLStoreClient.java#L1106.

In fact, {{getExceptionFromResponse}} swallows {{defaultMessage}} in several 
places. Suggest always surfacing the {{defaultMessage}} in some way.






[jira] [Created] (HADOOP-14294) Rename ADLS mountpoint properties

2017-04-07 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14294:
---

 Summary: Rename ADLS mountpoint properties
 Key: HADOOP-14294
 URL: https://issues.apache.org/jira/browse/HADOOP-14294
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/adl
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: John Zhuge


Follow up to HADOOP-14038. Rename the prefix of 
{{dfs.adls..mountpoint}} and {{dfs.adls..hostname}} to 
{{fs.adl.}}.

Borrow code from 
https://issues.apache.org/jira/secure/attachment/12857500/HADOOP-14038.006.patch
 and add a few unit tests.






[jira] [Created] (HADOOP-14292) Transient TestAdlContractRootDirLive failure

2017-04-07 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14292:
---

 Summary: Transient TestAdlContractRootDirLive failure
 Key: HADOOP-14292
 URL: https://issues.apache.org/jira/browse/HADOOP-14292
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/adl
Affects Versions: 3.0.0-alpha3
Reporter: John Zhuge
Assignee: Vishwajeet Dusane


Got the test failure once, but could not reproduce it the second time. Maybe a 
transient ADLS error?
{noformat}
Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 13.641 sec <<< 
FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlContractRootDirLive
testRecursiveRootListing(org.apache.hadoop.fs.adl.live.TestAdlContractRootDirLive)
  Time elapsed: 3.841 sec  <<< ERROR!
org.apache.hadoop.security.AccessControlException: LISTSTATUS failed with error 
0x83090aa2 (Forbidden. ACL verification failed. Either the resource does not 
exist or the user is not authorized to perform the requested operation.). 
[db432517-4060-4d96-9aad-7309f8469489][2017-04-07T10:24:54.1708810-07:00]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at 
com.microsoft.azure.datalake.store.ADLStoreClient.getRemoteException(ADLStoreClient.java:1144)
at 
com.microsoft.azure.datalake.store.ADLStoreClient.getExceptionFromResponse(ADLStoreClient.java:1106)
at 
com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectoryInternal(ADLStoreClient.java:527)
at 
com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectory(ADLStoreClient.java:504)
at 
com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectory(ADLStoreClient.java:368)
at 
org.apache.hadoop.fs.adl.AdlFileSystem.listStatus(AdlFileSystem.java:473)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1824)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1866)
at org.apache.hadoop.fs.FileSystem$4.<init>(FileSystem.java:2028)
at 
org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2027)
at 
org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2010)
at 
org.apache.hadoop.fs.FileSystem$5.handleFileStat(FileSystem.java:2168)
at org.apache.hadoop.fs.FileSystem$5.hasNext(FileSystem.java:2145)
at 
org.apache.hadoop.fs.contract.ContractTestUtils$TreeScanResults.<init>(ContractTestUtils.java:1252)
at 
org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRecursiveRootListing(AbstractContractRootDirectoryTest.java:219)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{noformat}







[jira] [Resolved] (HADOOP-14243) Add S3A sensitive config keys to default list

2017-04-06 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HADOOP-14243.
-
Resolution: Not A Problem

{{fs.s3a.secret.key}} is already on the default list.
{{fs.s3a.access.key}} is not on the default list by design.

> Add S3A sensitive config keys to default list
> -
>
> Key: HADOOP-14243
> URL: https://issues.apache.org/jira/browse/HADOOP-14243
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, security
>Affects Versions: 2.8.0
>    Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> S3A sensitive credential config keys should be added to the default list for 
> {{hadoop.security.sensitive-config-keys}}.






[jira] [Created] (HADOOP-14259) Verify viewfs works with ADLS

2017-03-30 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14259:
---

 Summary: Verify viewfs works with ADLS
 Key: HADOOP-14259
 URL: https://issues.apache.org/jira/browse/HADOOP-14259
 Project: Hadoop Common
  Issue Type: Test
  Components: fs/adl, viewfs
Affects Versions: 2.8.0
Reporter: John Zhuge
Priority: Minor


Many clusters can share a single ADL store as the default filesystem. To 
prevent directories with the same name from different clusters from colliding, 
use viewfs over the ADLS filesystem: 
* Set {{fs.defaultFS}} to {{viewfs://clusterX}} for cluster X
* Set {{fs.defaultFS}} to {{viewfs://clusterY}} for cluster Y
* The viewfs client mount table should have entries for clusterX and clusterY

Tasks
* Verify all filesystem operations work as expected, especially rename and 
concat
* Verify homedir entry works







[jira] [Created] (HADOOP-14258) Verify and document ADLS client mount table feature

2017-03-30 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14258:
---

 Summary: Verify and document ADLS client mount table feature
 Key: HADOOP-14258
 URL: https://issues.apache.org/jira/browse/HADOOP-14258
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/adl
Affects Versions: 2.8.0
Reporter: John Zhuge
Priority: Minor


ADLS connector supports a simple form of client mount table (chrooted) so that 
multiple clusters can share a single store as the default filesystem without 
sharing any directories. Verify and document this feature.

How to setup:
* Set property {{dfs.adls..hostname}} to 
{{.azuredatalakestore.net}}
* Set property {{dfs.adls..mountpoint}} to {{}}
* URI {{adl:///...}} will be translated to 
{{adl://.azuredatalakestore.net/}}







[jira] [Created] (HADOOP-14251) Credential provider should handle property key deprecation

2017-03-28 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14251:
---

 Summary: Credential provider should handle property key deprecation
 Key: HADOOP-14251
 URL: https://issues.apache.org/jira/browse/HADOOP-14251
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: John Zhuge
Assignee: John Zhuge


Properties stored in a credential store under old (deprecated) keys cannot be 
read via the new property keys.
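The fix would presumably mirror the deprecation handling in Hadoop's {{Configuration}}: when a lookup under a new key misses, fall back to the deprecated key before giving up. A minimal, self-contained sketch of that fallback logic follows; the key mapping and the {{getPassword}} helper are illustrative, not Hadoop's actual API.

```java
import java.util.HashMap;
import java.util.Map;

public class CredentialKeyFallback {
    // Illustrative old-key -> new-key pairs; real deprecations live in
    // Hadoop's Configuration deprecation tables, not here.
    private static final Map<String, String> NEW_TO_OLD = new HashMap<>();
    static {
        NEW_TO_OLD.put("fs.s3a.access.key", "fs.s3a.awsAccessKeyId");
        NEW_TO_OLD.put("fs.s3a.secret.key", "fs.s3a.awsSecretAccessKey");
    }

    /** Look up a credential, falling back to the deprecated key name. */
    public static String getPassword(Map<String, String> store, String key) {
        String value = store.get(key);
        if (value == null && NEW_TO_OLD.containsKey(key)) {
            // Credential was stored under the old key; resolve it anyway.
            value = store.get(NEW_TO_OLD.get(key));
        }
        return value;
    }

    public static void main(String[] args) {
        Map<String, String> store = new HashMap<>();
        // Simulate a credential store written with the old key only.
        store.put("fs.s3a.awsAccessKeyId", "AKIAEXAMPLE");
        System.out.println(getPassword(store, "fs.s3a.access.key"));
    }
}
```

Without the fallback, the lookup under the new key would return null even though the credential is present.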






Re: Review request for HADOOP-14202

2017-03-27 Thread John Zhuge
Sure Allen. I will help.

On Mon, Mar 27, 2017 at 4:01 PM, Allen Wittenauer 
wrote:

>
> Hey gang.
>
> Could I get a quick review of HADOOP-14202?  This changes  a few
> things:
>
> * Makes the rest of the _USER vars consistent with the other
> changes in trunk (e.g., HADOOP_SECURE_DN_USER becomes
> HDFS_DATANODE_SECURE_USER)
> * deprecation warnings as necessary
> * cleans up more of the privileged code handling, adding more unit
> tests in the process
> * Optimizes a ton of code out of hadoop, mapred, hdfs, and yarn
> now that the jsvc vars are treated similarly across all four sub projects
> * Fixes quite a few shellcheck errors by adding shellcheck source
> lines
> * doc updates to reflect above
>
>
> Thanks.
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


-- 
John


[jira] [Created] (HADOOP-14243) Add S3A sensitive keys to default Hadoop sensitive keys

2017-03-25 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14243:
---

 Summary: Add S3A sensitive keys to default Hadoop sensitive keys
 Key: HADOOP-14243
 URL: https://issues.apache.org/jira/browse/HADOOP-14243
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


S3A credential sensitive keys should be added to the default list for 
hadoop.security.sensitive-config-keys.
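The change would amount to appending the S3A keys to the default value of {{hadoop.security.sensitive-config-keys}} in core-default.xml. A sketch of what that could look like; the surrounding entries are illustrative and the actual default value varies by release:

```xml
<property>
  <name>hadoop.security.sensitive-config-keys</name>
  <value>
    secret$,
    password$,
    fs.s3a.access.key,
    fs.s3a.secret.key
  </value>
</property>
```

(Per the resolution at the top of this thread, {{fs.s3a.secret.key}} turned out to already be covered by the default list, and {{fs.s3a.access.key}} was excluded by design.)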






[jira] [Created] (HADOOP-14242) Configure KMS Tomcat SSL property sslEnabledProtocols

2017-03-25 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14242:
---

 Summary: Configure KMS Tomcat SSL property sslEnabledProtocols
 Key: HADOOP-14242
 URL: https://issues.apache.org/jira/browse/HADOOP-14242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: kms
Affects Versions: 2.6.0
Reporter: John Zhuge
Assignee: John Zhuge


Allow users to configure KMS Tomcat SSL property {{sslEnabledProtocols}}.
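In a Tomcat-based KMS, {{sslEnabledProtocols}} is an attribute on the HTTPS {{<Connector>}} element in Tomcat's server.xml; the improvement is to make it user-settable rather than fixed in the bundled template. A rough sketch of the connector, with port and keystore placeholders purely illustrative:

```xml
<!-- Illustrative HTTPS connector; port and keystore values are placeholders -->
<Connector port="9600" scheme="https" secure="true"
           SSLEnabled="true"
           sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2"
           keystoreFile="${kms.ssl.keystore.file}"
           keystorePass="${kms.ssl.keystore.pass}"/>
```

Exposing {{sslEnabledProtocols}} lets operators disable legacy protocols (e.g. restrict to TLSv1.2 only) without editing the shipped Tomcat configuration.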






[jira] [Created] (HADOOP-14241) Add ADLS credential keys to Hadoop sensitive key list

2017-03-25 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14241:
---

 Summary: Add ADLS credential keys to Hadoop sensitive key list
 Key: HADOOP-14241
 URL: https://issues.apache.org/jira/browse/HADOOP-14241
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/adl
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


ADLS credential config keys should be added to the default value for 
{{hadoop.security.sensitive-config-keys}}.






[jira] [Created] (HADOOP-14234) Improve ADLS FileSystem tests with JUnit4

2017-03-24 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14234:
---

 Summary: Improve ADLS FileSystem tests with JUnit4
 Key: HADOOP-14234
 URL: https://issues.apache.org/jira/browse/HADOOP-14234
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/adl, test
Affects Versions: 2.8.0
Reporter: John Zhuge
Priority: Minor


HADOOP-14180 switches FileSystem contract tests to JUnit4 and makes various 
enhancements. Improve ADLS FileSystem contract tests based on that.






[jira] [Created] (HADOOP-14230) TestAdlFileSystemContractLive fails to clean up

2017-03-24 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14230:
---

 Summary: TestAdlFileSystemContractLive fails to clean up
 Key: HADOOP-14230
 URL: https://issues.apache.org/jira/browse/HADOOP-14230
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/adl, test
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


TestAdlFileSystemContractLive fails to clean up test directories after the 
tests.

This is the leftover after {{testListStatus}}:
{noformat}
$ bin/hadoop fs -ls -R /
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 /user/jzhuge
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/a
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/b
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/c
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:17 
/user/jzhuge/FileSystemContractBaseTest/testListStatus/c/1
{noformat}

This is the leftover after {{testMkdirsFailsForSubdirectoryOfExistingFile}}:
{noformat}
$ bin/hadoop fs -ls -R /
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 /user/jzhuge
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
/user/jzhuge/FileSystemContractBaseTest
drwxr-xr-x   - ADLSAccessApp loginapp  0 2017-03-24 08:22 
/user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile
-rw-r--r--   1 ADLSAccessApp loginapp   2048 2017-03-24 08:22 
/user/jzhuge/FileSystemContractBaseTest/testMkdirsFailsForSubdirectoryOfExistingFile/file
{noformat}






