[jira] [Created] (HADOOP-18514) Remove the legacy Ozone website

2022-10-28 Thread Arpit Agarwal (Jira)
Arpit Agarwal created HADOOP-18514:
--

 Summary: Remove the legacy Ozone website
 Key: HADOOP-18514
 URL: https://issues.apache.org/jira/browse/HADOOP-18514
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Arpit Agarwal


Let's remove the old Ozone website: https://hadoop.apache.org/ozone/

Ozone moved to a separate TLP long ago and has its own website.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] Migrate hadoop from log4j1 to log4j2

2022-01-20 Thread Arpit Agarwal
Hi Duo,

Thank you for starting this discussion. The log4j1.2 bridge seems like a practical 
short-term solution. However, the bridge will silently affect applications that 
add appenders or filters. The NameNode audit logger and metrics come to mind. There 
may be others.

Thanks,
Arpit


> On Jan 20, 2022, at 5:55 AM, Duo Zhang  wrote:
> 
> There are 3 new CVEs reported recently for log4j1[1][2][3], so I think it
> is time to speed up the log4j2 migration work[4] now.
> 
> You can see the discussion on the jira issue[4]. Our goal is to fully
> migrate to log4j2, and the most blocking issue right now is the lack of
> "log4j.rootLogger=INFO,Console" grammar support in log4j2. I've already
> started a discussion thread on the log4j dev mailing list[5], the response
> was optimistic, and I've filed an issue for log4j2[6], but I do not think
> it can be addressed and released soon. If we want to fully migrate to
> log4j2, then we must either introduce new environment variables or split
> the old HADOOP_ROOT_LOGGER variable in the startup scripts. Considering
> the complexity of our current startup scripts, that work is not easy, and
> it will also break lots of other Hadoop deployment systems if they do not
> use our startup scripts...
> 
> So after reconsidering the current situation, I prefer we use the log4j1.2
> bridge to remove the log4j1 dependency first, and once LOG4J2-3341 is
> addressed and released, we can start to fully migrate to log4j2. Of course
> the log4j1.2 bridge has problems of its own: we have TaskLogAppender,
> ContainerLogAppender and ContainerRollingLogAppender, which inherit from
> log4j1's FileAppender and RollingFileAppender, and those are not part of
> the log4j1.2 bridge. But at least we could copy their source code into
> Hadoop, since WriterAppender is in the log4j1.2 bridge, and these two
> classes do not have related CVEs.
> 
> Thoughts? I would like us to make a new 3.4.x release line to remove
> the log4j1 dependencies ASAP.
> 
> Thanks.
> 
> 1. https://nvd.nist.gov/vuln/detail/CVE-2022-23302
> 2. https://nvd.nist.gov/vuln/detail/CVE-2022-23305
> 3. https://nvd.nist.gov/vuln/detail/CVE-2022-23307
> 4. https://issues.apache.org/jira/browse/HADOOP-16206
> 5. https://lists.apache.org/thread/gvfb3jkg6t11cyds4jmpo7lrswmx28w3
> 6. https://issues.apache.org/jira/browse/LOG4J2-3341
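For readers not familiar with the grammar in question: Hadoop's startup scripts feed the HADOOP_ROOT_LOGGER value into a log4j1-style property, roughly as in the sketch below (values are illustrative, and exact property names vary by release):

    # what operators set today (illustrative value):
    export HADOOP_ROOT_LOGGER="DEBUG,console"
    # the scripts turn this into a JVM flag, e.g. -Dhadoop.root.logger=DEBUG,console,
    # which etc/hadoop/log4j.properties consumes with the log4j1-only grammar:
    #   log4j.rootLogger=${hadoop.root.logger}
    hdfs namenode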


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] which release lines should we still consider actively maintained?

2021-05-24 Thread Arpit Agarwal
+1 to EOL 3.1.x at least.


> On May 23, 2021, at 9:51 PM, Wei-Chiu Chuang  wrote:
> 
> Sean,
> 
> For reasons I don't understand, I never received emails from your new
> address in the mailing list. Only Akira's response.
> 
> I was just able to start a thread like this.
> 
> I am +1 to EOL 3.1.5.
> Reason? Spark is already on Hadoop 3.2. Hive and Tez are actively working
> to support Hadoop 3.3. HBase supports Hadoop 3.3 already. Those are the
> most common Hadoop applications, so I think 3.1 isn't necessarily that
> important.
> 
> With Hadoop 3.3.1, we have a number of improvements to support a better
> HDFS upgrade experience, so upgrading from Hadoop 3.1 should be relatively
> easy. Application upgrades take some effort though (the commons-lang ->
> commons-lang3 migration, for example).
> I've been maintaining the HDFS code in branch-3.1, so from an
> HDFS perspective the branch is always in a ready-to-release state.
> 
> The Hadoop 3.1 line is more than 3 years old. Maintaining this branch is
> getting trickier. I am +100 on reducing the number of actively maintained
> release lines. IMO, 2 Hadoop 3 lines + 1 Hadoop 2 line is a good idea.
> 
> 
> 
> For the Hadoop 3.3 line: if no one beats me to it, I plan to make a 3.3.2
> in 2-3 months, and another one in another 2-3 months.
> Hadoop 3.3.1 has nearly 700 commits that are not in 3.3.0. It is very
> difficult to make/validate a maintenance release with such a big
> divergence in the code.
> 
> 
> On Mon, May 24, 2021 at 12:06 PM Akira Ajisaka  wrote:
> 
>> Hi Sean,
>> 
>> Thank you for starting the discussion.
>> 
>> I think branch-2.10, branch-3.1, branch-3.2, branch-3.3, and trunk
>> (3.4.x) are actively maintained.
>> 
>> The next releases will be:
>> - 3.4.0
>> - 3.3.1 (Thanks, Wei-Chiu!)
>> - 3.2.3
>> - 3.1.5
>> - 2.10.2
>> 
>>> Are there folks willing to go through being release managers to get more
>> of these release lines on a steady cadence?
>> 
>> Now I'm interested in becoming a release manager of 3.1.5.
>> 
>>> If I were to take up maintenance release for one of them which should it
>> be?
>> 
>> 3.2.3 or 2.10.2 seems to be a good choice.
>> 
>>> Should we declare to our downstream users that some of these lines
>> aren’t going to get more releases?
>> 
>> Now I think we don't need to declare that. I believe 3.3.1, 3.2.3,
>> 3.1.5, and 2.10.2 will be released in the near future.
>> There are some earlier discussions of 3.1.x EoL, so 3.1.5 may be a
>> final release of the 3.1.x release line.
>> 
>>> Is there downstream facing documentation somewhere that I missed for
>> setting expectations about our release cadence and actively maintained
>> branches?
>> 
>> As you commented, the confluence wiki pages for Hadoop releases were
>> out of date. Updated [1].
>> 
>>> Do we have a backlog of work written up that could make the release
>> process easier for our release managers?
>> 
>> The release process is documented and maintained:
>> https://cwiki.apache.org/confluence/display/HADOOP2/HowToRelease
>> Also, there are some backlogs [1], [2].
>> 
>> [1]:
>> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Active+Release+Lines
>> [2]: https://cwiki.apache.org/confluence/display/HADOOP/Roadmap
>> 
>> Thanks,
>> Akira
>> 
>> On Fri, May 21, 2021 at 7:12 AM Sean Busbey 
>> wrote:
>>> 
>>> 
>>> Hi folks!
>>> 
>>> Which release lines do we as a community still consider actively
>> maintained?
>>> 
>>> I found an earlier discussion[1] where we had consensus to consider
>> branches that don’t get maintenance releases on a regular basis end-of-life
>> for practical purposes. The result of that discussion was written up in our
>> wiki docs in the “EOL Release Branches” page, summarized here
>>> 
 If there is no volunteer to do a maintenance release in the short to
>> mid-term (like 3 months to 1 or 1.5 years).
>>> 
>>> Looking at release lines that are still on our download page[3]:
>>> 
>>> * Hadoop 2.10.z - last release 8 months ago
>>> * Hadoop 3.1.z - last release 9.5 months ago
>>> * Hadoop 3.2.z - last release 4.5 months ago
>>> * Hadoop 3.3.z - last release 10 months ago
>>> 
>>> And then trunk holds 3.4 which hasn’t had a release since the branch-3.3
>> fork ~14 months ago.
>>> 
>>> I can see that Wei-Chiu has been actively working on getting the 3.3.1
>> release out[4] (thanks Wei-Chiu!) but I do not see anything similar for the
>> other release lines.
>>> 
>>> We also have pages on the wiki for our project roadmap of release[5],
>> but it seems out of date since it lists in progress releases that have
>> happened or branches we have announced as end of life, i.e. 2.8.
>>> 
>>> We also have a group of pages (sorry, I’m not sure what the confluence
>> jargon is for this) for “hadoop active release lines”[6] but this list has
>> 2.8, 2.9, 3.0, 3.1, and 3.3. So several declared end of life lines and no
>> 2.10 or 3.2 despite those being our release lines with the most recent
>> releases.
>>> 
>>> Are there folks willing to go through being release managers to get more
>>> of these release lines on a steady cadence?

[jira] [Created] (HADOOP-16992) Update download links

2020-04-16 Thread Arpit Agarwal (Jira)
Arpit Agarwal created HADOOP-16992:
--

 Summary: Update download links
 Key: HADOOP-16992
 URL: https://issues.apache.org/jira/browse/HADOOP-16992
 Project: Hadoop Common
  Issue Type: Improvement
  Components: website
Reporter: Arpit Agarwal


The download links for signatures/checksums/KEYS should be updated from 
dist.apache.org to https://downloads.apache.org/hadoop/ozone/.
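A quick sanity check of the new location once the links are updated (a sketch; the exact directory layout under /hadoop/ozone/ is an assumption):

    curl -sI https://downloads.apache.org/hadoop/ozone/ | head -n 1
    # expect an HTTP 200 for the directory listing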



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Hadoop 3.1.2 missing from download site?

2020-04-16 Thread Arpit Agarwal
The Hadoop 3.1.2 download links seem to be broken on 
https://hadoop.apache.org/releases.html

Was it removed on purpose?



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Apache Hadoop Ozone 0.5.0-beta RC2

2020-03-21 Thread Arpit Agarwal
+1 binding.

- Verified hashes and signatures
- Built from source
- Deployed to 5 node cluster
- Tried ozone shell and filesystem operations
- Ran freon stress test for a while with write validation
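For anyone repeating the first two checks, a minimal sketch (artifact names are illustrative; the KEYS URL is the one from the vote announcement quoted below):

    curl -LO https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
    gpg --import KEYS
    gpg --verify hadoop-ozone-0.5.0-beta-src.tar.gz.asc hadoop-ozone-0.5.0-beta-src.tar.gz
    sha512sum -c hadoop-ozone-0.5.0-beta-src.tar.gz.sha512
    # note: depending on how the .sha512 file is formatted, you may need
    # shasum -a 512 or a manual compare instead of sha512sum -c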


I couldn’t find the RC2 tag in the gitbox repo, although it is there in GitHub.
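One way to check for the tag on gitbox directly (a sketch; the repo URL is assumed from the project's gitbox naming):

    git ls-remote https://gitbox.apache.org/repos/asf/hadoop-ozone.git refs/tags/ozone-0.5.0-beta-RC2
    # empty output means the tag is not present on gitbox yet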

Thanks,
Arpit



> On Mar 21, 2020, at 3:57 PM, Hanisha Koneru  
> wrote:
> 
> Thank you Dinesh for putting up the RCs.
> 
> +1 binding.
> 
> Verified the following:
>  - Built from source
>  - Deployed to 5 node docker cluster and ran sanity tests.
>  - Ran smoke tests
> 
> Thanks
> Hanisha
> 
>> On Mar 15, 2020, at 7:27 PM, Dinesh Chitlangia  wrote:
>> 
>> Hi Folks,
>> 
>> We have put together RC2 for Apache Hadoop Ozone 0.5.0-beta.
>> 
>> The RC artifacts are at:
>> https://home.apache.org/~dineshc/ozone-0.5.0-rc2/
>> 
>> The public key used for signing the artifacts can be found at:
>> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>> 
>> The maven artifacts are staged at:
>> https://repository.apache.org/content/repositories/orgapachehadoop-1262
>> 
>> The RC tag in git is at:
>> https://github.com/apache/hadoop-ozone/tree/ozone-0.5.0-beta-RC2
>> 
>> This release contains 800+ fixes/improvements [1].
>> Thanks to everyone who put in the effort to make this happen.
>> 
>> *The vote will run for 7 days, ending on March 22nd 2020 at 11:59 pm PST.*
>> 
>> Note: This release is beta quality, it’s not recommended to use in
>> production but we believe that it’s stable enough to try out the feature
>> set and collect feedback.
>> 
>> 
>> [1] https://s.apache.org/ozone-0.5.0-fixed-issues
>> 
>> Thanks,
>> Dinesh Chitlangia
> 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary

2020-03-17 Thread Arpit Agarwal
Thanks for the clarification Brahma. Can you update the proposal to state that 
it is optional (it may help to put the proposal on cwiki)?

Also, if we go ahead, the RM documentation should make clear that this is an 
optional step.


> On Mar 17, 2020, at 11:06 AM, Brahma Reddy Battula  wrote:
> 
> Sure, we can't make it mandatory for voting, and we can upload to downloads
> once the release vote has passed.
> 
> On Tue, 17 Mar 2020 at 11:24 PM, Arpit Agarwal
>  wrote:
> 
>>> Sorry, didn't get you... do you mean, once release voting is
>>> complete, the upload is done by the RM?
>> 
>> Yes, that is what I meant. I don’t want us to make more mandatory work for
>> the release manager because the job is hard enough already.
>> 
>> 
>>> On Mar 17, 2020, at 10:46 AM, Brahma Reddy Battula 
>> wrote:
>>> 
>>> Sorry, didn't get you... do you mean, once release voting is complete,
>>> the upload is done by the RM?
>>> 
>>> FYI, there is a docker image for ARM as well, which supports all the
>>> scripts (create-release, start-build-env.sh, etc.).
>>> 
>>> https://issues.apache.org/jira/browse/HADOOP-16797
>>> 
>>> On Tue, Mar 17, 2020 at 10:59 PM Arpit Agarwal
>>>  wrote:
>>> 
>>>> Can ARM binaries be provided after the fact? We cannot increase the RM’s
>>>> burden by asking them to generate an extra set of binaries.
>>>> 
>>>> 
>>>>> On Mar 17, 2020, at 10:23 AM, Brahma Reddy Battula 
>>>> wrote:
>>>>> 
>>>>> + Dev mailing list.
>>>>> 
>>>>> -- Forwarded message -
>>>>> From: Brahma Reddy Battula 
>>>>> Date: Tue, Mar 17, 2020 at 10:31 PM
>>>>> Subject: Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary
>>>>> To: junping_du 
>>>>> 
>>>>> 
>>>>> Thanks, Junping, for your reply.
>>>>> 
>>>>> bq. I think most of us in the Hadoop community don't want to be
>>>>> biased toward ARM or any other platform.
>>>>> 
>>>>> Yes, release voting will be based on the source code. AFAIK, the
>>>>> binary is provided so users can easily download and verify.
>>>>> 
>>>>> bq. The only thing I am trying to understand is how much complexity
>>>>> gets involved in our RM work. Does that potentially become a blocker
>>>>> for future releases? And how can we get rid of this risk?
>>>>> 
>>>>> As I mentioned earlier, the RM needs access to the ARM machine (one
>>>>> will be donated, and the current qbt also uses an ARM machine) and
>>>>> builds the tar using their keys. As it can be a shared machine, the RM
>>>>> can delete their keys once the release is approved.
>>>>> This can be sorted out as I mentioned earlier (for accessing the ARM
>>>>> machine).
>>>>> 
>>>>> bq. If you can list the concrete extra work that the RM needs to do
>>>>> for an ARM release, that would help us to better understand.
>>>>> 
>>>>> I can write this up and update it for future reference.
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> On Tue, Mar 17, 2020 at 10:41 AM 俊平堵  wrote:
>>>>> 
>>>>>> Hi Brahma,
>>>>>>   I think most of us in the Hadoop community don't want to be biased
>>>>>> toward ARM or any other platform.
>>>>>>   The only thing I am trying to understand is how much complexity
>>>>>> gets involved in our RM work. Does that potentially become a blocker
>>>>>> for future releases? And how can we get rid of this risk?
>>>>>>   If you can list the concrete extra work that the RM needs to do for
>>>>>> an ARM release, that would help us to better understand.
>>>>>> 
>>>>>> Thanks,
>>>>>> 
>>>>>> Junping
>>>>>> 
>>>>> Akira Ajisaka wrote on Fri, Mar 13, 2020 at 12:34 AM:
>>>>>> 
>>>>>>> If you can provide ARM release for future releases, I'm fine with
>> that.
>>>>>>> 
>>>>>>> Thanks,
>>>>>>> Akira
>>>>>>> 
>>>>>>> 

Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary

2020-03-17 Thread Arpit Agarwal
> Sorry, didn't get you... do you mean, once release voting is
> complete, the upload is done by the RM?

Yes, that is what I meant. I don’t want us to make more mandatory work for the 
release manager because the job is hard enough already.
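For reference, the dockerized ARM tooling mentioned below (HADOOP-16797) would be driven roughly like this; this is a sketch, and the exact create-release flags are assumptions based on the dev-support docs:

    # enter the dockerized build environment (script exists in the source tree)
    ./start-build-env.sh
    # build release artifacts; --docker runs the build inside the release container
    dev-support/bin/create-release --docker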


> On Mar 17, 2020, at 10:46 AM, Brahma Reddy Battula  wrote:
> 
> Sorry, didn't get you... do you mean, once release voting is complete, the
> upload is done by the RM?
> 
> FYI, there is a docker image for ARM as well, which supports all the
> scripts (create-release, start-build-env.sh, etc.).
> 
> https://issues.apache.org/jira/browse/HADOOP-16797
> 
> On Tue, Mar 17, 2020 at 10:59 PM Arpit Agarwal
>  wrote:
> 
>> Can ARM binaries be provided after the fact? We cannot increase the RM’s
>> burden by asking them to generate an extra set of binaries.
>> 
>> 
>>> On Mar 17, 2020, at 10:23 AM, Brahma Reddy Battula 
>> wrote:
>>> 
>>> + Dev mailing list.
>>> 
>>> -- Forwarded message -
>>> From: Brahma Reddy Battula 
>>> Date: Tue, Mar 17, 2020 at 10:31 PM
>>> Subject: Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary
>>> To: junping_du 
>>> 
>>> 
>>> Thanks, Junping, for your reply.
>>> 
>>> bq. I think most of us in the Hadoop community don't want to be biased
>>> toward ARM or any other platform.
>>> 
>>> Yes, release voting will be based on the source code. AFAIK, the binary
>>> is provided so users can easily download and verify.
>>> 
>>> bq. The only thing I am trying to understand is how much complexity gets
>>> involved in our RM work. Does that potentially become a blocker for
>>> future releases? And how can we get rid of this risk?
>>> 
>>> As I mentioned earlier, the RM needs access to the ARM machine (one will
>>> be donated, and the current qbt also uses an ARM machine) and builds the
>>> tar using their keys. As it can be a shared machine, the RM can delete
>>> their keys once the release is approved.
>>> This can be sorted out as I mentioned earlier (for accessing the ARM
>>> machine).
>>> 
>>> bq. If you can list the concrete extra work that the RM needs to do for
>>> an ARM release, that would help us to better understand.
>>> 
>>> I can write this up and update it for future reference.
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On Tue, Mar 17, 2020 at 10:41 AM 俊平堵  wrote:
>>> 
>>>> Hi Brahma,
>>>> I think most of us in the Hadoop community don't want to be biased
>>>> toward ARM or any other platform.
>>>> The only thing I am trying to understand is how much complexity gets
>>>> involved in our RM work. Does that potentially become a blocker for
>>>> future releases? And how can we get rid of this risk?
>>>> If you can list the concrete extra work that the RM needs to do for an
>>>> ARM release, that would help us to better understand.
>>>> 
>>>> Thanks,
>>>> 
>>>> Junping
>>>> 
>>>> Akira Ajisaka wrote on Fri, Mar 13, 2020 at 12:34 AM:
>>>> 
>>>>> If you can provide ARM release for future releases, I'm fine with that.
>>>>> 
>>>>> Thanks,
>>>>> Akira
>>>>> 
>>>>> On Thu, Mar 12, 2020 at 9:41 PM Brahma Reddy Battula <
>> bra...@apache.org>
>>>>> wrote:
>>>>> 
>>>>>> thanks Akira.
>>>>>> 
>>>>>> Currently the only problem is a dedicated ARM machine for future
>>>>>> RMs. I want to sort this out as below; if you have other ideas,
>>>>>> please let me know.
>>>>>> 
>>>>>> i) A single machine, with credentials shared with future RMs (as we
>>>>>> can delete the keys once the release is over).
>>>>>> ii) Creating a Jenkins project (maybe we need to discuss this with
>>>>>> the board..)
>>>>>> iii) I can provide the ARM release for future releases.
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> On Thu, Mar 12, 2020 at 5:14 PM Akira Ajisaka 
>>>>> wrote:
>>>>>> 
>>>>>>> Hi Brahma,
>>>>>>> 
>>>>>>> I think we cannot do any of your proposed actions.
>>>>>>> 

Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary

2020-03-17 Thread Arpit Agarwal
Can ARM binaries be provided after the fact? We cannot increase the RM’s burden 
by asking them to generate an extra set of binaries.


> On Mar 17, 2020, at 10:23 AM, Brahma Reddy Battula  wrote:
> 
> + Dev mailing list.
> 
> -- Forwarded message -
> From: Brahma Reddy Battula 
> Date: Tue, Mar 17, 2020 at 10:31 PM
> Subject: Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary
> To: junping_du 
> 
> 
> Thanks, Junping, for your reply.
> 
> bq. I think most of us in the Hadoop community don't want to be biased
> toward ARM or any other platform.
> 
> Yes, release voting will be based on the source code. AFAIK, the binary
> is provided so users can easily download and verify.
> 
> bq. The only thing I am trying to understand is how much complexity gets
> involved in our RM work. Does that potentially become a blocker for future
> releases? And how can we get rid of this risk?
> 
> As I mentioned earlier, the RM needs access to the ARM machine (one will
> be donated, and the current qbt also uses an ARM machine) and builds the
> tar using their keys. As it can be a shared machine, the RM can delete
> their keys once the release is approved.
> This can be sorted out as I mentioned earlier (for accessing the ARM
> machine).
> 
> bq. If you can list the concrete extra work that the RM needs to do for
> an ARM release, that would help us to better understand.
> 
> I can write this up and update it for future reference.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> On Tue, Mar 17, 2020 at 10:41 AM 俊平堵  wrote:
> 
>> Hi Brahma,
>> I think most of us in the Hadoop community don't want to be biased
>> toward ARM or any other platform.
>> The only thing I am trying to understand is how much complexity gets
>> involved in our RM work. Does that potentially become a blocker for
>> future releases? And how can we get rid of this risk?
>> If you can list the concrete extra work that the RM needs to do for an
>> ARM release, that would help us to better understand.
>> 
>> Thanks,
>> 
>> Junping
>> 
>> Akira Ajisaka wrote on Fri, Mar 13, 2020 at 12:34 AM:
>> 
>>> If you can provide ARM release for future releases, I'm fine with that.
>>> 
>>> Thanks,
>>> Akira
>>> 
>>> On Thu, Mar 12, 2020 at 9:41 PM Brahma Reddy Battula 
>>> wrote:
>>> 
 thanks Akira.
 
 Currently the only problem is a dedicated ARM machine for future RMs. I
 want to sort this out as below; if you have other ideas, please let me
 know.
 
 i) A single machine, with credentials shared with future RMs (as we can
 delete the keys once the release is over).
 ii) Creating a Jenkins project (maybe we need to discuss this with the
 board..)
 iii) I can provide the ARM release for future releases.
 
 
 
 
 
 
 
 On Thu, Mar 12, 2020 at 5:14 PM Akira Ajisaka 
>>> wrote:
 
> Hi Brahma,
> 
> I think we cannot do any of your proposed actions.
> 
> 
> http://www.apache.org/legal/release-policy.html#owned-controlled-hardware
>> Strictly speaking, releases must be verified on hardware owned and
>> controlled by the committer. That means hardware the committer has
>> physical possession and control of and exclusively full
>> administrative/superuser access to. That's because only such hardware is
>> qualified to hold a PGP private key, and the release should be verified
>> on the machine the private key lives on or on a machine as trusted as
>> that.
> 
> https://www.apache.org/dev/release-distribution.html#sigs-and-sums
>> Private keys MUST NOT be stored on any ASF machine. Likewise, signatures
>> for releases MUST NOT be created on ASF machines.
> 
> We need to have dedicated physical ARM machines for each release manager,
> and that is not feasible right now.
> If you provide an unofficial ARM binary release in some repository, that's
> okay.
> 
> -Akira
> 
> On Thu, Mar 12, 2020 at 7:57 PM Brahma Reddy Battula <
>>> bra...@apache.org>
> wrote:
> 
>> Hello folks,
>> 
>> Currently trunk supports ARM-based compilation, and qbt(1) has been
>> running for several months quite stably, hence I am planning to propose
>> an ARM binary this time.
>> 
>> (Note: as we all know, voting will be based on the source, so this will
>> not be an issue.)
>> 
>> *Proposed Change:*
>> Currently in downloads we keep only the x86 binary(2). Can we keep the
>> ARM binary as well?
>> 
>> *Actions:*
>> a) *Dedicated* *Machine*:
>>   i) A dedicated ARM machine will be donated, which I have confirmed.
>>   ii) Or we can use the Jenkins ARM machine itself, which is currently
>>   used for ARM.
>> b) *Automate Release:* How about having one release project in Jenkins,
>> so that future RMs just trigger the Jenkins project?
>> 
>> Please let me know your thoughts on this.
>> 
>> 
>> 1.
>> 
>> 
 
>>> https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-qbt-linux-ARM-trunk/
>> 2.http

Re: [VOTE] Apache Hadoop Ozone 0.5.0-beta RC1

2020-03-13 Thread Arpit Agarwal
HDDS-3116 is now fixed in the ozone-0.5.0 branch.

Folks - any more potential blockers before Dinesh spins RC2? I don’t see 
anything in Jira at the moment:

https://issues.apache.org/jira/issues/?jql=project%20in%20(%22HDDS%22)%20and%20%22Target%20Version%2Fs%22%20in%20(0.5.0)%20and%20resolution%20in%20(Unresolved)%20and%20priority%20in%20(Blocker)
 
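Decoded, that JQL query reads:

    project in ("HDDS") and "Target Version/s" in (0.5.0)
    and resolution in (Unresolved) and priority in (Blocker)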

Thanks,
Arpit


> On Mar 9, 2020, at 6:15 PM, Shashikant Banerjee 
>  wrote:
> 
> I think https://issues.apache.org/jira/browse/HDDS-3116 is a blocker for
> the release. Because of this, datanodes fail to communicate with SCM, are
> marked dead, and don't seem to recover.
> This has been observed in multiple test setups.
> 
> Thanks
> Shashi
> 
> On Mon, Mar 9, 2020 at 9:20 PM Attila Doroszlai  wrote:
> 
>> +1
>> 
>> * Verified GPG signature and SHA512 checksum
>> * Compiled sources
>> * Ran ozone smoke test against both binary and locally compiled versions
>> 
>> Thanks Dinesh for RC1.
>> 
>> -Attila
>> 
>> On Sun, Mar 8, 2020 at 2:34 AM Arpit Agarwal
>>  wrote:
>>> 
>>> +1 (binding)
>>> Verified mds, sha512
>>> Verified signatures
>>> Built from source
>>> Deployed to 3 node cluster
>>> Tried a few ozone shell and filesystem commands
>>> Ran freon load generator
>>> Thanks Dinesh for putting the RC1 together.
>>> 
>>> 
>>> 
>>>> On Mar 6, 2020, at 4:46 PM, Dinesh Chitlangia 
>> wrote:
>>>> 
>>>> Hi Folks,
>>>> 
>>>> We have put together RC1 for Apache Hadoop Ozone 0.5.0-beta.
>>>> 
>>>> The RC artifacts are at:
>>>> https://home.apache.org/~dineshc/ozone-0.5.0-rc1/
>>>> 
>>>> The public key used for signing the artifacts can be found at:
>>>> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>>>> 
>>>> The maven artifacts are staged at:
>>>> 
>> https://repository.apache.org/content/repositories/orgapachehadoop-1260
>>>> 
>>>> The RC tag in git is at:
>>>> https://github.com/apache/hadoop-ozone/tree/ozone-0.5.0-beta-RC1
>>>> 
>>>> This release contains 800+ fixes/improvements [1].
>>>> Thanks to everyone who put in the effort to make this happen.
>>>> 
>>>> *The vote will run for 7 days, ending on March 13th 2020 at 11:59 pm
>> PST.*
>>>> 
>>>> Note: This release is beta quality, it’s not recommended to use in
>>>> production but we believe that it’s stable enough to try out the
>> feature
>>>> set and collect feedback.
>>>> 
>>>> 
>>>> [1] https://s.apache.org/ozone-0.5.0-fixed-issues
>>>> 
>>>> Thanks,
>>>> Dinesh Chitlangia
>>> 
>> 
>> -
>> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org


Re: [VOTE] Apache Hadoop Ozone 0.5.0-beta RC1

2020-03-07 Thread Arpit Agarwal
+1 (binding)
Verified mds, sha512
Verified signatures
Built from source
Deployed to 3 node cluster
Tried a few ozone shell and filesystem commands
Ran freon load generator 
Thanks Dinesh for putting the RC1 together.



> On Mar 6, 2020, at 4:46 PM, Dinesh Chitlangia  wrote:
> 
> Hi Folks,
> 
> We have put together RC1 for Apache Hadoop Ozone 0.5.0-beta.
> 
> The RC artifacts are at:
> https://home.apache.org/~dineshc/ozone-0.5.0-rc1/
> 
> The public key used for signing the artifacts can be found at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> 
> The maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1260
> 
> The RC tag in git is at:
> https://github.com/apache/hadoop-ozone/tree/ozone-0.5.0-beta-RC1
> 
> This release contains 800+ fixes/improvements [1].
> Thanks to everyone who put in the effort to make this happen.
> 
> *The vote will run for 7 days, ending on March 13th 2020 at 11:59 pm PST.*
> 
> Note: This release is beta quality, it’s not recommended to use in
> production but we believe that it’s stable enough to try out the feature
> set and collect feedback.
> 
> 
> [1] https://s.apache.org/ozone-0.5.0-fixed-issues
> 
> Thanks,
> Dinesh Chitlangia



Re: [VOTE] Apache Hadoop Ozone 0.5.0-beta RC0

2020-02-28 Thread Arpit Agarwal
Hi Dinesh,

Thanks for spinning up this RC! Looks like we still had ~15 issues that were 
tagged as Blockers for 0.5.0 in jira.

I’ve moved out most of them; however, the remaining 4 look like must-fixes.

https://issues.apache.org/jira/issues/?jql=project%20%3D%20%22HDDS%22%20and%20%22Target%20Version%2Fs%22%20in%20(0.5.0)%20and%20resolution%20%3D%20Unresolved%20and%20priority%20%3D%20Blocker
 


Thanks,
Arpit


> On Feb 27, 2020, at 8:23 PM, Dinesh Chitlangia  wrote:
> 
> Hi Folks,
> 
> We have put together RC0 for Apache Hadoop Ozone 0.5.0-beta.
> 
> The RC artifacts are at:
> https://home.apache.org/~dineshc/ozone-0.5.0-rc0/
> 
> The public key used for signing the artifacts can be found at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> 
> The maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1259
> 
> The RC tag in git is at:
> https://github.com/apache/hadoop-ozone/tree/ozone-0.5.0-beta-RC0
> 
> This release contains 800+ fixes/improvements [1].
> Thanks to everyone who put in the effort to make this happen.
> 
> *The vote will run for 7 days, ending on March 4th 2020 at 11:59 pm PST.*
> 
> Note: This release is beta quality, it’s not recommended to use in
> production but we believe that it’s stable enough to try out the feature
> set and collect feedback.
> 
> 
> [1] https://s.apache.org/ozone-0.5.0-fixed-issues
> 
> Thanks,
> Dinesh Chitlangia



Re: [DISCUSS] Feature branch for HDFS-14978 In-place Erasure Coding Conversion

2020-01-23 Thread Arpit Agarwal
+1


> On Jan 23, 2020, at 2:51 PM, Jitendra Pandey  wrote:
> 
> +1 for the feature branch. 
> 
> On Thu, Jan 23, 2020 at 1:34 PM Wei-Chiu Chuang 
>  wrote:
> Hi we are working on a feature to improve Erasure Coding, and I would like
> to seek your opinion on creating a feature branch for it. (HDFS-14978
>  >)
> 
> Reason for a feature branch
> (1) it turns out we need to update NameNode layout version
> (2) It's a medium size project and we want to get this feature merged in
> its entirety.
> 
> Aravindan Vijayan and I are planning to work on this feature.
> 
> Thoughts?
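For readers unfamiliar with the mechanics, an ASF feature branch is just a shared branch in the main repo (a sketch; the branch name is taken from the jira):

    git fetch origin
    git checkout -b HDFS-14978 origin/trunk
    git push -u origin HDFS-14978   # committers can then collaborate on the branch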



Re: Needs support to add more entropy to improve cryptographic randomness on Linux VMs

2020-01-14 Thread Arpit Agarwal
Thanks for flagging this Ahmed.

urandom is a perfectly acceptable option. I am not sure who maintains the 
pre-commit machines these days.



> On Jan 14, 2020, at 10:42 AM, Ahmed Hussein  wrote:
> 
> Hi,
> 
> I was investigating a JUnit test
> (MAPREDUCE-7079: TestMRIntermediateDataEncryption is failing in precommit
> builds) that was consistently hanging on Linux VMs and failing Mapreduce
> pre-commit builds.
> I found that the test slows down or hangs indefinitely whenever Java reads
> from the random device.
> 
> I explored two different ways to get that test case to work properly on my
> local Linux VM running RHEL 7:
> 
>   1. Install "haveged" and "rng-tools" on the virtual machine running
>   RHEL 7, then start the rngd service: sudo service rngd start
>   2. Change the Java configuration to load urandom:
>   sudo vim $JAVA_HOME/jre/lib/security/java.security
>   ## Change the line "securerandom.source=file:/dev/random" to read:
>   securerandom.source=file:/dev/./urandom
> 
> 
> Is it possible to apply any of the above solutions to the VM that runs the
> precommit builds?
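A rough sketch of checking the symptom, plus a per-JVM variant of option 2 (the JVM flag is standard, but whether a given build forwards it to forked test JVMs is an assumption):

    # persistently low values here mean reads from /dev/random will block:
    cat /proc/sys/kernel/random/entropy_avail
    # per-JVM override equivalent to editing java.security (jar name illustrative):
    java -Djava.security.egd=file:/dev/./urandom -jar your-test-runner.jar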


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] Ozone 0.4.2 release

2019-12-07 Thread Arpit Agarwal
+1



> On Dec 6, 2019, at 5:25 PM, Dinesh Chitlangia  wrote:
> 
> All,
> Since the Apache Hadoop Ozone 0.4.1 release, we have had significant
> bug fixes towards performance & stability.
> 
> With that in mind, 0.4.2 release would be good to consolidate all those fixes.
> 
> Pls share your thoughts.
> 
> 
> Thanks,
> Dinesh Chitlangia


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16688) Update Hadoop website to mention Ozone mailing lists

2019-11-06 Thread Arpit Agarwal (Jira)
Arpit Agarwal created HADOOP-16688:
--

 Summary: Update Hadoop website to mention Ozone mailing lists
 Key: HADOOP-16688
 URL: https://issues.apache.org/jira/browse/HADOOP-16688
 Project: Hadoop Common
  Issue Type: Improvement
  Components: website
Reporter: Arpit Agarwal


Now that Ozone has its separate mailing lists, let's list them on the Hadoop 
website.

https://hadoop.apache.org/mailing_lists.html

Thanks to [~ayushtkn] for suggesting this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] create ozone-dev and ozone-issues mailing lists

2019-11-01 Thread Arpit Agarwal
Thanks for kicking this off Marton. Submitted INFRA requests to create the 
following. The lists should be live soon.

- ozone-dev@h.a.o
- ozone-issues@h.a.o
- ozone-commits@h.a.o



> On Oct 31, 2019, at 3:32 AM, Elek, Marton  wrote:
> 
> 
> Thanks for all the votes and feedback.
> 
> The vote is passed with no -1 and with many +1
> 
> The mailing lists will be created soon and the notification settings will be 
> updated.
> 
> Thank you for your patience.
> Marton
> 
> 
> On 10/27/19 9:25 AM, Elek, Marton wrote:
>> As discussed earlier in the thread "Hadoop-Ozone repository mailing list 
>> configurations" [1], I suggested solving the current misconfiguration 
>> problem by creating separate mailing lists (dev/issues) for Hadoop Ozone.
>> It would have some additional benefits: for example, it would make it 
>> easier to follow Ozone development and future plans.
>> Here I am starting a new vote thread (open for at least 72 hours) to collect 
>> more feedback about this.
>> Please express your opinion / vote.
>> Thanks a lot,
>> Marton
>> [1] 
>> https://lists.apache.org/thread.html/dc66a30f48a744534e748c418bf7ab6275896166ca5ade11560ebaef@%3Chdfs-dev.hadoop.apache.org%3E
>>  -
>> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> 
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] create ozone-dev and ozone-issues mailing lists

2019-10-30 Thread Arpit Agarwal
+1

> On Oct 27, 2019, at 1:25 AM, Elek, Marton  wrote:
> 
> 
> As discussed earlier in the thread "Hadoop-Ozone repository mailing list 
> configurations" [1], I suggested solving the current misconfiguration problem 
> by creating separate mailing lists (dev/issues) for Hadoop Ozone.
> 
> It would have some additional benefits: for example, it would make it easier 
> to follow Ozone development and future plans.
> 
> Here I am starting a new vote thread (open for at least 72 hours) to collect 
> more feedback about this.
> 
> Please express your opinion / vote.
> 
> Thanks a lot,
> Marton
> 
> [1] 
> https://lists.apache.org/thread.html/dc66a30f48a744534e748c418bf7ab6275896166ca5ade11560ebaef@%3Chdfs-dev.hadoop.apache.org%3E
> 
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk source tree

2019-09-18 Thread Arpit Agarwal
+1


> On Sep 17, 2019, at 2:49 AM, Elek, Marton  wrote:
> 
> 
> 
> TLDR; I propose to move Ozone related code out from Hadoop trunk and store it 
> in a separated *Hadoop* git repository apache/hadoop-ozone.git
> 
> 
> 
> 
> When Ozone was adopted as a new Hadoop subproject, it was proposed[1] to be 
> part of the source tree but with a separate release cadence, mainly because 
> it had hadoop-trunk/SNAPSHOT as a compile-time dependency.
> 
> During the last Ozone releases this dependency was removed to provide more 
> stable releases. Instead of using the latest trunk/SNAPSHOT build from 
> Hadoop, Ozone uses the latest stable Hadoop (3.2.0 as of now).
> 
> As there is no longer a strict dependency between Hadoop trunk SNAPSHOT and 
> Ozone trunk, I propose to separate the two code bases from each other by 
> creating a new Hadoop git repository (apache/hadoop-ozone.git):
> 
> With moving Ozone to a separated git repository:
> 
> * It would be easier to contribute to and understand the build (as of now 
> we always need `-f pom.ozone.xml` as a Maven parameter)
> * It would be possible to adjust the build process without breaking 
> Hadoop/Ozone builds.
> * It would be possible to use different Readme/.asf.yaml/github templates 
> for Hadoop Ozone and core Hadoop. (For example, the current github template 
> [2] has a link to the contribution guideline [3]. Ozone has an extended 
> version [4] of this guideline with additional information.)
> * Testing would be safer, as it won't be possible to change core Hadoop 
> and Hadoop Ozone in the same patch.
> * It would be easier to cut branches for Hadoop releases (based on the 
> original consensus, Ozone should be removed from all the release branches 
> after creating release branches from trunk)
> 
> 
> What do you think?
> 
> Thanks,
> Marton
> 
> [1]: 
> https://lists.apache.org/thread.html/c85e5263dcc0ca1d13cbbe3bcfb53236784a39111b8c353f60582eb4@%3Chdfs-dev.hadoop.apache.org%3E
> [2]: 
> https://github.com/apache/hadoop/blob/trunk/.github/pull_request_template.md
> [3]: https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
> [4]: 
> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone
> 
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Move Submarine source code, documentation, etc. to a separate Apache Git repo

2019-08-29 Thread Arpit Agarwal
+1

> On Aug 23, 2019, at 7:05 PM, Wangda Tan  wrote:
> 
> Hi devs,
> 
> This is a voting thread to move Submarine source code, documentation from
> Hadoop repo to a separate Apache Git repo. Which is based on discussions of
> https://lists.apache.org/thread.html/e49d60b2e0e021206e22bb2d430f4310019a8b29ee5020f3eea3bd95@%3Cyarn-dev.hadoop.apache.org%3E
> 
> Contributors who have permissions to push to Hadoop Git repository will
> have permissions to push to the new Submarine repository.
> 
> This voting thread will run for 7 days and will end at Aug 30th.
> 
> Please let me know if you have any questions.
> 
> Thanks,
> Wangda Tan


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] Merge HDFS-13891(RBF) to trunk

2019-06-10 Thread Arpit Agarwal
Thanks for the explanation Brahma and Iñigo!

+0 from me (haven’t looked at it closely enough to give a +1).

Regards,
Arpit


> On Jun 10, 2019, at 10:12 AM, Brahma Reddy Battula  wrote:
> 
> Dear Arpit,
> 
> Thanks for taking look into it.
> 
> ECBlockGroupStats.merge() is a utility method that was moved to the 
> hadoop-hdfs-client module. Ideally it would have been a separate jira, but 
> this change will not induce any issues; I will take the necessary action 
> for this.
> 
> On Mon, Jun 10, 2019 at 8:40 PM Arpit Agarwal  wrote:
> I scanned the merge payload for changes to non-RBF code. The changes are 
> minimal, which is good.
> 
> The only commit that I didn’t understand was:
> https://issues.apache.org/jira/browse/HDFS-14268 
> 
> The jira description doesn’t make it clear why ECBlockGroupStats is modified.
> 
> +0 apart from that.
> 
> 
>> On Jun 1, 2019, at 8:40 PM, Brahma Reddy Battula  wrote:
>> 
>> Dear Hadoop Developers
>> 
>> I would like to propose merging the RBF branch (HDFS-13891) into trunk. We
>> have been working on this feature for the last several months.
>> This feature work received the contributions from different companies. All
>> of the feature development happened smoothly and collaboratively in JIRAs.
>> 
>> Kindly do take a look at the branch and raise issues/concerns that need to
>> be addressed before the merge.
>> 
>> *Highlights of HDFS-13891 Branch:*
>> =
>> 
>> Adding Security to RBF(1)
>> Adding Missing Client API's(2)
>> Improvements/Bug Fixing
>>  Critical - HDFS-13637, HDFS-13834
>> 
>> *Commits:*
>> 
>> 
>> No of JIRAs Resolved: 72
>> 
>> All these commits are in the RBF module. No changes in hdfs/common.
>> 
>> *Tested Cluster:*
>> =
>> 
>> Most of these changes were verified at Uber, Microsoft, Huawei, and some
>> other companies.
>> 
>> *Uber*: Most changes are running in production @Uber including the critical
>> security changes, HDFS Clusters are 4000+ nodes with 8 HDFS Routers.
>> Zookeeper as a state store to hold delegation tokens were also stress
>> tested to hold more than 2 Million tokens. --CR Hota
>> 
>> *Microsoft*: Most of these changes are currently running in production at
>> Microsoft. The security has also been tested in a 500-server cluster with
>> 4 subclusters. --Inigo Goiri
>> 
>> *Huawei*: Deployed all these changes in a 20-node cluster with 3
>> routers. Planning to deploy to a 10K-node production cluster.
>> 
>> *Contributors:*
>> ===
>> 
>> Many thanks to Akira Ajisaka,Mohammad Arshad,Takanobu Asanuma,Shubham
>> Dewan,CR Hota,Fei Hui,Inigo Goiri,Dibyendu Karmakar,Fengna Li,Gang
>> Li,Surendra Singh Lihore,Ranith Sardar,Ayush Saxena,He Xiaoqiao,Sherwood
>> Zheng,Daryn Sharp,VinayaKumar B,Anu Engineer for taking part in discussions
>> and contributing to this.
>> 
>> *Future Tasks:*
>> 
>> 
>> Will clean up the jiras under this umbrella and continue the work.
>> 
>> Reference:
>> 1) https://issues.apache.org/jira/browse/HDFS-13532 
>> <https://issues.apache.org/jira/browse/HDFS-13532>
>> 2) https://issues.apache.org/jira/browse/HDFS-13655 
>> <https://issues.apache.org/jira/browse/HDFS-13655>
>> 
>> 
>> 
>> 
>> --Brahma Reddy Battula
> 
> 
> 
> -- 
> 
> 
> 
> --Brahma Reddy Battula



Re: [DISCUSS] Merge HDFS-13891(RBF) to trunk

2019-06-10 Thread Arpit Agarwal
I scanned the merge payload for changes to non-RBF code. The changes are 
minimal, which is good.

The only commit that I didn’t understand was:
https://issues.apache.org/jira/browse/HDFS-14268 


The jira description doesn’t make it clear why ECBlockGroupStats is modified.

+0 apart from that.


> On Jun 1, 2019, at 8:40 PM, Brahma Reddy Battula  wrote:
> 
> Dear Hadoop Developers
> 
> I would like to propose merging the RBF branch (HDFS-13891) into trunk. We
> have been working on this feature for the last several months.
> This feature work received the contributions from different companies. All
> of the feature development happened smoothly and collaboratively in JIRAs.
> 
> Kindly do take a look at the branch and raise issues/concerns that need to
> be addressed before the merge.
> 
> *Highlights of HDFS-13891 Branch:*
> =
> 
> Adding Security to RBF(1)
> Adding Missing Client API's(2)
> Improvements/Bug Fixing
>  Critical - HDFS-13637, HDFS-13834
> 
> *Commits:*
> 
> 
> No of JIRAs Resolved: 72
> 
> All these commits are in the RBF module. No changes in hdfs/common.
> 
> *Tested Cluster:*
> =
> 
> Most of these changes were verified at Uber, Microsoft, Huawei, and some
> other companies.
> 
> *Uber*: Most changes are running in production @Uber including the critical
> security changes, HDFS Clusters are 4000+ nodes with 8 HDFS Routers.
> Zookeeper as a state store to hold delegation tokens were also stress
> tested to hold more than 2 Million tokens. --CR Hota
> 
> *Microsoft*: Most of these changes are currently running in production at
> Microsoft. The security has also been tested in a 500-server cluster with 4
> subclusters. --Inigo Goiri
> 
> *Huawei*: Deployed all these changes in a 20-node cluster with 3
> routers. Planning to deploy to a 10K-node production cluster.
> 
> *Contributors:*
> ===
> 
> Many thanks to Akira Ajisaka,Mohammad Arshad,Takanobu Asanuma,Shubham
> Dewan,CR Hota,Fei Hui,Inigo Goiri,Dibyendu Karmakar,Fengna Li,Gang
> Li,Surendra Singh Lihore,Ranith Sardar,Ayush Saxena,He Xiaoqiao,Sherwood
> Zheng,Daryn Sharp,VinayaKumar B,Anu Engineer for taking part in discussions
> and contributing to this.
> 
> *Future Tasks:*
> 
> 
> Will clean up the jiras under this umbrella and continue the work.
> 
> Reference:
> 1) https://issues.apache.org/jira/browse/HDFS-13532
> 2) https://issues.apache.org/jira/browse/HDFS-13655
> 
> 
> 
> 
> --Brahma Reddy Battula



Re: [VOTE] Unprotect HDFS-13891 (HDFS RBF Branch)

2019-05-14 Thread Arpit Agarwal
The request is specific to HDFS-13891, correct?

We should not allow force push on trunk.
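For context, the blocked operation is the standard branch sync (a sketch):

    git checkout HDFS-13891
    git rebase origin/trunk                          # rewrites branch history
    git push --force-with-lease origin HDFS-13891    # rejected on protected branches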


> On May 14, 2019, at 8:07 AM, Anu Engineer  
> wrote:
> 
> Is it possible to unprotect the branches and not the trunk? Generally, a
> force push to trunk indicates a mistake and we have had that in the past.
> This is just a suggestion,  even if this request is not met, I am still +1.
> 
> Thanks
> Anu
> 
> 
> 
> On Tue, May 14, 2019 at 4:58 AM Takanobu Asanuma 
> wrote:
> 
>> +1.
>> 
>> Thanks!
>> - Takanobu
>> 
>> 
>> From: Akira Ajisaka 
>> Sent: Tuesday, May 14, 2019 4:26:30 PM
>> To: Giovanni Matteo Fumarola
>> Cc: Iñigo Goiri; Brahma Reddy Battula; Hadoop Common; Hdfs-dev
>> Subject: Re: [VOTE] Unprotect HDFS-13891 (HDFS RBF Branch)
>> 
>> +1 to unprotect the branch.
>> 
>> Thanks,
>> Akira
>> 
>> On Tue, May 14, 2019 at 3:11 PM Giovanni Matteo Fumarola
>>  wrote:
>>> 
>>> +1 to unprotect the branches for rebases.
>>> 
>>> On Mon, May 13, 2019 at 11:01 PM Iñigo Goiri  wrote:
>>> 
 Syncing the branch to trunk should be a fairly standard task.
 Is there a way to do this without rebasing and forcing the push?
 As far as I know this has been the standard for other branches and I
 don't know of any alternative.
 We should clarify the process, as having to get PMC consensus to rebase
 a branch seems a little overkill to me.
 
 +1 from my side to unprotect the branch to do the rebase.
 
 On Mon, May 13, 2019, 22:46 Brahma Reddy Battula 
 wrote:
 
> Hi Folks,
> 
> INFRA-18181 made all the Hadoop branches protected.
> Unfortunately, the HDFS-13891 branch needs to be rebased as we contribute
> core patches to trunk. So currently we are stuck with the rebase, as force
> pushes are not allowed. Hence I raised INFRA-18361.
> 
> Can we have a quick vote for INFRA sign-off to proceed, as this is
> blocking all branch commits?
> 
> --
> 
> 
> 
> --Brahma Reddy Battula
> 
 
>> 
>> -
>> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>> 
>> 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: VOTE: Hadoop Ozone 0.4.0-alpha RC2

2019-05-05 Thread Arpit Agarwal
Thanks for building this RC Ajay.

+1 binding.

- verified signatures and checksums
- built from source 
- deployed to 3 node cluster, tried out basic operations
- ran smoke tests
- ran unit tests
- LICENSE/NOTICE files look ok

There is an extra file in the source root named JenkinsFile.



> On Apr 29, 2019, at 9:04 PM, Ajay Kumar  wrote:
> 
> Hi All,
> 
> 
> 
> We have created the third release candidate (RC2) for Apache Hadoop Ozone 
> 0.4.0-alpha.
> 
> 
> 
> This release contains security payload for Ozone. Below are some important 
> features in it:
> 
> 
> 
>  *   Hadoop Delegation Tokens and Block Tokens supported for Ozone.
>  *   Transparent Data Encryption (TDE) Support - Allows data blocks to be 
> encrypted-at-rest.
>  *   Kerberos support for Ozone.
>  *   Certificate Infrastructure for Ozone  - Tokens use PKI instead of shared 
> secrets.
>  *   Datanode to Datanode communication secured via mutual TLS.
>  *   Ability to secure an Ozone cluster that works with YARN, Hive, and Spark.
>  *   Skaffold support to deploy Ozone clusters on K8s.
>  *   Support for S3 authentication mechanisms like the S3 v4 authentication 
> protocol.
>  *   S3 Gateway supports Multipart upload.
>  *   S3A file system is tested and supported.
>  *   Support for Tracing and Profiling for all Ozone components.
>  *   Audit Support - including Audit Parser tools.
>  *   Apache Ranger Support in Ozone.
>  *   Extensive failure testing for Ozone.
> 
> The RC artifacts are available at 
> https://home.apache.org/~ajay/ozone-0.4.0-alpha-rc2/
> 
> 
> 
> The RC tag in git is ozone-0.4.0-alpha-RC2 (git hash 
> 4ea602c1ee7b5e1a5560c6cbd096de4b140f776b)
> 
> 
> 
> Please try out, vote, or just give us feedback.
> 
> 
> 
> The vote will run for 5 days, ending on May 4, 2019, 04:00 UTC.
> 
> 
> 
> Thank you very much,
> 
> Ajay


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: VOTE: Hadoop Ozone 0.4.0-alpha RC1

2019-04-22 Thread Arpit Agarwal
Thanks Ajay for putting together this RC.

Unfortunately HDDS-1425 looks like a blocker. We should make the docker 
experience smooth for anyone trying out 0.4.0.

I’ve just committed Marton’s patch for HDDS-1425 this morning. Let’s roll a new 
RC.



> On Apr 15, 2019, at 4:09 PM, Ajay Kumar  wrote:
> 
> Hi all,
> 
> We have created the second release candidate (RC1) for Apache Hadoop Ozone 
> 0.4.0-alpha.
> 
> This release contains security payload for Ozone. Below are some important 
> features in it:
> 
>  *   Hadoop Delegation Tokens and Block Tokens supported for Ozone.
>  *   Transparent Data Encryption (TDE) Support - Allows data blocks to be 
> encrypted-at-rest.
>  *   Kerberos support for Ozone.
>  *   Certificate Infrastructure for Ozone  - Tokens use PKI instead of shared 
> secrets.
>  *   Datanode to Datanode communication secured via mutual TLS.
>  *   Ability to secure an Ozone cluster that works with YARN, Hive, and Spark.
>  *   Skaffold support to deploy Ozone clusters on K8s.
>  *   Support for S3 authentication mechanisms like the S3 v4 authentication 
> protocol.
>  *   S3 Gateway supports Multipart upload.
>  *   S3A file system is tested and supported.
>  *   Support for Tracing and Profiling for all Ozone components.
>  *   Audit Support - including Audit Parser tools.
>  *   Apache Ranger Support in Ozone.
>  *   Extensive failure testing for Ozone.
> 
> The RC artifacts are available at 
> https://home.apache.org/~ajay/ozone-0.4.0-alpha-rc1
> 
> The RC tag in git is ozone-0.4.0-alpha-RC1 (git hash 
> d673e16d14bb9377f27c9017e2ffc1bcb03eebfb)
> 
> Please try out, vote, or just give us feedback.
> 
> The vote will run for 5 days, ending on April 20, 2019, 19:00 UTC.
> 
> Thank you very much,
> 
> Ajay
> 
> 



[jira] [Reopened] (HADOOP-13386) Upgrade Avro to 1.8.x

2019-03-25 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HADOOP-13386:


Reopening since HADOOP-14992 upgraded Avro to 1.7.7. This jira requested an 
upgrade to 1.8.x.
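One quick way to confirm which Avro version a given build actually resolves (a sketch):

    mvn dependency:tree -Dincludes=org.apache.avro:avro
    # per the comment above, this currently shows 1.7.7 rather than 1.8.x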

> Upgrade Avro to 1.8.x
> -
>
> Key: HADOOP-13386
> URL: https://issues.apache.org/jira/browse/HADOOP-13386
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Ben McCann
>Priority: Major
>
> Avro 1.8.x makes generated classes serializable which makes them much easier 
> to use with Spark. It would be great to upgrade Avro to 1.8.x



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] Docker build process

2019-03-19 Thread Arpit Agarwal
Hi Eric,

> Dockerfile is most likely to change to apply the security fix.

I am not sure this is always the case. Marton’s point about revising docker 
images independently of Hadoop versions is valid. 


> When maven release is automated through Jenkins, this is a breeze
> of clicking a button.  Jenkins even increments the target version
> automatically, with an option to edit. 

I did not understand this suggestion. Could you please explain in simpler terms 
or share a link to the description?


> I will make adjustment accordingly unless 7 more people comes
> out and say otherwise.

What adjustment is this?

Thanks,
Arpit
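For readers following the thread: the "inline vs. optional" debate is about whether image creation is bound to the default build or gated behind a Maven profile, roughly as below (the profile name is illustrative):

    # default build: no docker needed
    mvn clean install -DskipTests
    # opt-in image build, only when docker is available (profile name assumed):
    mvn clean install -DskipTests -Pdocker-build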


> On Mar 19, 2019, at 10:19 AM, Eric Yang  wrote:
> 
> Hi Marton,
> 
> Thank you for your input.  I agree with most of what you said with a few 
> exceptions.  A security fix should result in a different version of the image 
> instead of replacing a certain version.  The Dockerfile is most likely to 
> change to apply the security fix.  If it did not change, the source is 
> unstable over time and results in non-buildable code.  When the maven release 
> is automated through Jenkins, this is a breeze of clicking a button.  Jenkins 
> even increments the target version automatically, with an option to edit.  It 
> makes the release manager's job easier than Homer Simpson's job.
> 
> If versioning is done correctly, older branches can have the same docker 
> subproject, and Hadoop 2.7.8 can be released for older Hadoop branches.  We 
> don't generate timeline paradox to allow changing the history of Hadoop 
> 2.7.1.  That release has passed and let it stay that way.
> 
> There is mounting evidence that the Hadoop community wants a docker profile 
> for the developer image.  The precommit build will not catch some build 
> errors because more code is allowed to slip through with a profile build 
> process.  I will make adjustments accordingly unless 7 more people come out 
> and say otherwise.
> 
> Regards,
> Eric
> 
> On 3/19/19, 1:18 AM, "Elek, Marton"  wrote:
> 
> 
> 
Thank you, Eric, for describing the problem.
> 
>I have multiple small comments, trying to separate them.
> 
>I. separated vs in-build container image creation
> 
>> The disadvantages are:
>> 
>> 1.  Require developer to have access to docker.
>> 2.  Default build takes longer.
> 
> 
>These are not the only disadvantages (IMHO), as I wrote in the
>previous thread and in the issue [1]
> 
>Using in-build container image creation doesn't make it possible:
>
>1. to modify the image later (eg. apply security fixes to the container
>itself or apply improvements to the startup scripts)
>2. to create images for older releases (eg. hadoop 2.7.1)
> 
>I think there are two kind of images:
> 
>a) images for released artifacts
>b) developer images
> 
>I would prefer to manage a) with separated branch repositories but b)
>with (optional!) in-build process.
> 
>II. Agree with Steve. I think it's better to make it optional as most of
>the time it's not required. I think it's better to support the default
>dev build with the default settings (=just enough to start)
> 
>III. Maven best practices
> 
>(https://dzone.com/articles/maven-profile-best-practices)
> 
>I think this is a good article. But this is not against profiles but
>creating multiple versions from the same artifact with the same name
>(eg. jdk8/jdk11). In Hadoop, profiles are used to introduce optional
>steps. I think it's fine as the maven lifecycle/phase model is very
>static (compare it with the tree based approach in Gradle).
> 
>Marton
> 
>[1]: https://issues.apache.org/jira/browse/HADOOP-16091
> 
>On 3/13/19 11:24 PM, Eric Yang wrote:
>> Hi Hadoop developers,
>> 
>> In the recent months, there were various discussions on creating docker 
>> build process for Hadoop.  There was convergence to make docker build 
>> process inline in the mailing list last month when Ozone team is planning 
>> new repository for Hadoop/ozone docker images.  New feature has started to 
>> add docker image build process inline in Hadoop build.
>> A few lessons learnt from making docker build inline in YARN-7129.  The 
>> build environment must have docker to have a successful docker build.  
>> BUILD.txt stated for easy build environment use Docker.  There is logic in 
>> place to ensure that absence of docker does not trigger docker build.  The 
>> inline process tries to be as non-disruptive as possible to existing 
>> development environment with one exception.  If docker’s presence is 
>> detected, but user does not have rights to run docker.  This will cause the 
>> build to fail.
>> 
>> Now, some developers are pushing back on inline docker build process because 
>> existing environment did not make docker build process mandatory.  However, 
>> there are benefits to use inline docker build process.  The listed benefits 
>> are:
>> 
>> 1.  Source code tag, maven repository artifacts and docker hub artifacts can 
>> all be pr

Re: [VOTE] Release Apache Hadoop 3.1.2 - RC1

2019-02-05 Thread Arpit Agarwal
+1 binding for updated source package.

  - Rechecked signatures and checksums
  - Source matches release git tag
  - Built from source


> On Feb 5, 2019, at 10:50 AM, Sunil G  wrote:
> 
> Thanks Billie for pointing out.
> I have updated the source by removing patchprocess and the extra line in
> create-release.
> 
> Also updated checksum as well.
> 
> @bil...@apache.org   @Wangda Tan 
> please help to verify this changed bit once.
> 
> Thanks
> Sunil
> 
> On Tue, Feb 5, 2019 at 5:23 AM Billie Rinaldi 
> wrote:
> 
>> Hey Sunil and Wangda, thanks for the RC. The source tarball has a
>> patchprocess directory with some yetus code in it. Also, the file
>> dev-support/bin/create-release has the following line added:
>>  export GPG_AGENT_INFO="/home/sunilg/.gnupg/S.gpg-agent:$(pgrep
>> gpg-agent):1"
>> 
>> I think we are probably due for an overall review of LICENSE and NOTICE. I
>> saw some idiosyncrasies there but nothing that looked like a blocker.
>> 
>> On Mon, Jan 28, 2019 at 10:20 PM Sunil G  wrote:
>> 
>>> Hi Folks,
>>> 
>>> On behalf of Wangda, we have an RC1 for Apache Hadoop 3.1.2.
>>> 
>>> The artifacts are available here:
>>> http://home.apache.org/~sunilg/hadoop-3.1.2-RC1/
>>> 
>>> The RC tag in git is release-3.1.2-RC1:
>>> https://github.com/apache/hadoop/commits/release-3.1.2-RC1
>>> 
>>> The maven artifacts are available via repository.apache.org at
>>> https://repository.apache.org/content/repositories/orgapachehadoop-1215
>>> 
>>> This vote will run 5 days from now.
>>> 
>>> 3.1.2 contains 325 [1] fixed JIRA issues since 3.1.1.
>>> 
>>> We have done testing with a pseudo cluster and distributed shell job.
>>> 
>>> My +1 to start.
>>> 
>>> Best,
>>> Wangda Tan and Sunil Govindan
>>> 
>>> [1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.2)
>>> ORDER BY priority DESC
>>> 
>> 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.1.2 - RC1

2019-02-04 Thread Arpit Agarwal
+1 (binding)

- Verified signatures
- Verified checksums
- Built from source
- Verified Maven artifacts on staging repo
- Deployed 3 node cluster
- Tried out HDFS commands, MapReduce jobs.

Confirmed the issues Billie pointed out. Not sure if you need to spin up a new 
RC or you can update the tarball - contents of the git tag look fine.


> On Jan 28, 2019, at 10:19 PM, Sunil G  wrote:
> 
> Hi Folks,
> 
> On behalf of Wangda, we have an RC1 for Apache Hadoop 3.1.2.
> 
> The artifacts are available here:
> http://home.apache.org/~sunilg/hadoop-3.1.2-RC1/
> 
> The RC tag in git is release-3.1.2-RC1:
> https://github.com/apache/hadoop/commits/release-3.1.2-RC1
> 
> The maven artifacts are available via repository.apache.org at
> https://repository.apache.org/content/repositories/orgapachehadoop-1215
> 
> This vote will run 5 days from now.
> 
> 3.1.2 contains 325 [1] fixed JIRA issues since 3.1.1.
> 
> We have done testing with a pseudo cluster and distributed shell job.
> 
> My +1 to start.
> 
> Best,
> Wangda Tan and Sunil Govindan
> 
> [1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.2)
> ORDER BY priority DESC


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: proposed new repository for hadoop/ozone docker images (+update on docker works)

2019-01-29 Thread Arpit Agarwal
I’ve requested a new repo hadoop-docker-ozone.git in gitbox.


> On Jan 22, 2019, at 4:59 AM, Elek, Marton  wrote:
> 
> 
> 
> TLDR;
> 
> I proposed to create a separate git repository for ozone docker images
> in HDDS-851 (hadoop-docker-ozone.git)
> 
> If there are no objections in the next 3 days I will ask an Apache Member
> to create the repository.
> 
> 
> 
> 
> LONG VERSION:
> 
> In HADOOP-14898 multiple docker containers and helper scripts are
> created for Hadoop.
> 
> The main goal was to:
> 
> 1.) help the development with easy-to-use docker images
> 2.) provide official hadoop images to make it easy to test new features
> 
> As of now we have:
> 
> - apache/hadoop-runner image (which contains the required dependency
> but no hadoop)
> - apache/hadoop:2 and apache/hadoop:3 images (to try out latest hadoop
> from 2/3 lines)
> 
> The base image to run hadoop (apache/hadoop-runner) is also heavily used
> for Ozone distribution/development.
> 
> The Ozone distribution contains docker-compose based cluster definitions
> to start various type of clusters and scripts to do smoketesting. (See
> HADOOP-16063 for more details).
> 
> Note: I personally believe that these definitions help a lot to start
> different types of clusters. For example, it could be tricky to try out
> router based federation as it requires multiple HA clusters. But with a
> simple docker-compose definition [1] it could be started under 3
> minutes. (HADOOP-16063 is about creating these definitions for various
> hdfs/yarn use cases)
> 
> As of now we have dedicated branches in the hadoop git repository for
> the docker images (docker-hadoop-runner, docker-hadoop-2,
> docker-hadoop-3). It turns out that a separate repository would be more
> effective, as dockerhub can use only full branch names as tags.
> 
> We would like to provide ozone docker images to make the evaluation as
> easy as 'docker run -d apache/hadoop-ozone:0.3.0', therefore in HDDS-851
> we agreed to create a separated repository for the hadoop-ozone docker
> images.
> 
> If this approach works well, we can also move the existing
> docker-hadoop-2/docker-hadoop-3/docker-hadoop-runner branches out of
> hadoop.git to another separate hadoop-docker.git repository.
> 
> Please let me know if you have any comments,
> 
> Thanks,
> Marton
> 
> 1: see
> https://github.com/flokkr/runtime-compose/tree/master/hdfs/routerfeder
> as an example
> 
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] - HDDS-4 Branch merge

2019-01-17 Thread Arpit Agarwal
+1

It's great to see Ozone security being merged.


On 2019/01/11, 7:40 AM, "Anu Engineer"  wrote:

Since I have not heard any concerns, I will start a VOTE thread now.
This vote will run for 7 days and will end on Jan/18/2019 @ 8:00 AM PST.

I will start with my vote, +1 (Binding)

Thanks
Anu


-- Forwarded message -
From: Anu Engineer 
Date: Mon, Jan 7, 2019 at 5:10 PM
Subject: [Discuss] - HDDS-4 Branch merge
To: , 


Hi All,

I would like to propose a merge of HDDS-4 branch to the Hadoop trunk.
HDDS-4 branch implements the security work for HDDS and Ozone.

HDDS-4 branch contains the following features:
- Hadoop Kerberos and Tokens support
- A Certificate infrastructure used by Ozone and HDDS.
- Audit Logging and parsing support (Spread across trunk and HDDS-4)
- S3 Security Support - AWS Signature Support.
- Apache Ranger Support for Ozone

I will follow up with a formal vote later this week if I hear no
objections. AFAIK, the changes are isolated to HDDS/Ozone and should not
impact any other Hadoop project.

Thanks
Anu




Re: [VOTE] Release Apache Hadoop Ozone 0.3.0-alpha (RC1)

2018-11-15 Thread Arpit Agarwal
+1 binding.

  - Verified signatures
  - Verified checksums
  - Checked LICENSE/NOTICE files
  - Built from source
  - Deployed to three node cluster and ran smoke tests.

Thanks Marton for putting up the RC.


On 2018/11/14, 9:14 AM, "Elek, Marton"  wrote:

Hi all,

I've created the second release candidate (RC1) for Apache Hadoop Ozone
0.3.0-alpha including one more fix on top of the previous RC0 (HDDS-854)

This is the second release of Apache Hadoop Ozone. Notable changes since
the first release:

* A new S3 compatible rest server is added. Ozone can be used from any
> S3 compatible tool (HDDS-434)
* Ozone Hadoop file system URL prefix is renamed from o3:// to o3fs://
(HDDS-651)
* Extensive testing and stability improvements of OzoneFs.
* Spark, YARN and Hive support and stability improvements.
* Improved Pipeline handling and recovery.
* Separated/dedicated classpath definitions for all the Ozone
components. (HDDS-447)

The RC artifacts are available from:
https://home.apache.org/~elek/ozone-0.3.0-alpha-rc1/

The RC tag in git is: ozone-0.3.0-alpha-RC1 (ebbf459e6a6)

Please try it out, vote, or just give us feedback.

The vote will run for 5 days, ending on November 19, 2018 18:00 UTC.


Thank you very much,
Marton


PS:

The easiest way to try it out is:

1. Download the binary artifact
2. Read the docs from ./docs/index.html
3. TLDR; cd compose/ozone && docker-compose up -d
4. open localhost:9874 or localhost:9876



The easiest way to try it out from the source:

1. mvn  install -DskipTests -Pdist -Dmaven.javadoc.skip=true -Phdds
-DskipShade -am -pl :hadoop-ozone-dist
2. cd hadoop-ozone/dist/target/ozone-0.3.0-alpha && docker-compose up -d



The easiest way to test basic functionality (with acceptance tests):

1. mvn  install -DskipTests -Pdist -Dmaven.javadoc.skip=true -Phdds
-DskipShade -am -pl :hadoop-ozone-dist
2. cd hadoop-ozone/dist/target/ozone-0.3.0-alpha/smoketest
3. ./test.sh

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org





Re: Jenkins build machines are down

2018-10-31 Thread Arpit Agarwal
Thanks Bharat.

On 2018/10/31, 12:01 PM, "Bharat Viswanadham"  
wrote:

There is already an infra ticket opened for this.
https://issues.apache.org/jira/browse/INFRA-17188

Comment from this Infra ticket:
Please read the builds@ list, they are getting a disk upgrade and will be 
back when complete.

Thank You @Marton Elek for providing this information.

Thanks,
Bharat


On 10/31/18, 11:58 AM, "Arpit Agarwal"  wrote:

A number of the Jenkins build machines appear to be down with different 
error messages.

https://builds.apache.org/label/Hadoop/

Does anyone know what is the process to restore them? I assume INFRA 
cannot help as they don’t own these machines.



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org





-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org


Jenkins build machines are down

2018-10-31 Thread Arpit Agarwal
A number of the Jenkins build machines appear to be down with different error 
messages.

https://builds.apache.org/label/Hadoop/

Does anyone know what is the process to restore them? I assume INFRA cannot 
help as they don’t own these machines.




[jira] [Created] (HADOOP-15867) Allow registering MBeans without additional jmx properties

2018-10-21 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-15867:
--

 Summary: Allow registering MBeans without additional jmx properties
 Key: HADOOP-15867
 URL: https://issues.apache.org/jira/browse/HADOOP-15867
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


HDDS and Ozone use the MBeans.register overload added by HADOOP-15339. This is 
missing in Apache Hadoop 3.1.0 and earlier. This prevents us from building 
Ozone with earlier versions of Hadoop. More commonly, we see runtime exceptions 
if an earlier version of Hadoop happens to be in the classpath.

Let's add a reflection-based switch to invoke the right version of the API so 
we can build and use Ozone with Apache Hadoop 3.1.0.
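
For illustration, the switch could look roughly like this (a minimal sketch only, 
not the final patch; the wrapper class name {{MBeansCompat}} is made up):

{code}
import java.lang.reflect.Method;
import java.util.Map;
import javax.management.ObjectName;
import org.apache.hadoop.metrics2.util.MBeans;

public final class MBeansCompat {
  private MBeansCompat() { }

  public static ObjectName register(String serviceName, String nameName,
      Map<String, String> jmxProperties, Object theMbean) {
    try {
      // Use the HADOOP-15339 overload when it exists on the classpath.
      Method m = MBeans.class.getMethod("register", String.class,
          String.class, Map.class, Object.class);
      return (ObjectName) m.invoke(null, serviceName, nameName,
          jmxProperties, theMbean);
    } catch (NoSuchMethodException e) {
      // Hadoop 3.1.0 and earlier: fall back to the 3-argument API and
      // drop the extra JMX properties.
      return MBeans.register(serviceName, nameName, theMbean);
    } catch (ReflectiveOperationException e) {
      throw new IllegalStateException("MBean registration failed", e);
    }
  }
}
{code}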



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop Ozone 0.2.1-alpha (RC0)

2018-09-26 Thread Arpit Agarwal
Thanks for putting together this release Marton.

+1 (binding)

  - Verified signatures and checksums
  - LICENSE and NOTICE files exist in source and binary tarballs
  - Built from source code
  - Deployed to three node cluster
  - Tried out Ozone shell commands via 'ozone sh'
  - Tried out OzoneFS commands using 'ozone fs'


Hit a couple of issues, but they should not block Alpha1:
- SCM may not exit chill-mode if DataNodes are started before it (startup order 
dependency?)
- DataNode logs are not going to the correct location.

-Arpit


On 2018/09/19, 2:49 PM, "Elek, Marton"  wrote:

Hi all,

After the recent discussion about the first Ozone release I've created 
the first release candidate (RC0) for Apache Hadoop Ozone 0.2.1-alpha.

This release is alpha quality: it’s not recommended for use in production, 
but we believe that it’s stable enough to try out the feature set and 
collect feedback.

The RC artifacts are available from: 
https://home.apache.org/~elek/ozone-0.2.1-alpha-rc0/

The RC tag in git is: ozone-0.2.1-alpha-RC0 (968082ffa5d)

Please try the release and vote; the vote will run for the usual 5 
working days, ending on September 26, 2018 10pm UTC time.

The easiest way to try it out is:

1. Download the binary artifact
2. Read the docs at ./docs/index.html
3. TLDR; cd compose/ozone && docker-compose up -d


Please try it out, vote, or just give us feedback.

Thank you very much,
Marton

ps: At next week, we will have a BoF session at ApacheCon North Europe, 
Montreal on Monday evening. Please join, if you are interested, or need 
support to try out the package or just have any feedback.


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org





[jira] [Created] (HADOOP-15727) Missing dependency errors from dist-tools-hooks-maker

2018-09-06 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-15727:
--

 Summary: Missing dependency errors from dist-tools-hooks-maker
 Key: HADOOP-15727
 URL: https://issues.apache.org/jira/browse/HADOOP-15727
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.2.0
Reporter: Arpit Agarwal


Building Hadoop with -Pdist -Dtar generates the following errors. These don't 
stop the build from succeeding though.

{code}
ERROR: hadoop-azure has missing dependencies: 
jetty-util-ajax-9.3.19.v20170502.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
hadoop-yarn-common-3.2.0-SNAPSHOT.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
hadoop-hdfs-client-3.2.0-SNAPSHOT.jar
ERROR: hadoop-resourceestimator has missing dependencies: okhttp-2.7.5.jar
ERROR: hadoop-resourceestimator has missing dependencies: okio-1.6.0.jar
ERROR: hadoop-resourceestimator has missing dependencies: jersey-client-1.19.jar
ERROR: hadoop-resourceestimator has missing dependencies: guice-servlet-4.0.jar
ERROR: hadoop-resourceestimator has missing dependencies: guice-4.0.jar
ERROR: hadoop-resourceestimator has missing dependencies: aopalliance-1.0.jar
ERROR: hadoop-resourceestimator has missing dependencies: jersey-guice-1.19.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
jackson-module-jaxb-annotations-2.9.5.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
jackson-jaxrs-json-provider-2.9.5.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
jackson-jaxrs-base-2.9.5.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
hadoop-yarn-api-3.2.0-SNAPSHOT.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
hadoop-yarn-server-resourcemanager-3.2.0-SNAPSHOT.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
jetty-util-ajax-9.3.19.v20170502.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
hadoop-yarn-server-common-3.2.0-SNAPSHOT.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
hadoop-yarn-registry-3.2.0-SNAPSHOT.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
commons-daemon-1.0.13.jar
ERROR: hadoop-resourceestimator has missing dependencies: dnsjava-2.1.7.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
geronimo-jcache_1.0_spec-1.0-alpha-1.jar
ERROR: hadoop-resourceestimator has missing dependencies: ehcache-3.3.1.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
HikariCP-java7-2.4.12.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
hadoop-yarn-server-applicationhistoryservice-3.2.0-SNAPSHOT.jar
ERROR: hadoop-resourceestimator has missing dependencies: objenesis-1.0.jar
ERROR: hadoop-resourceestimator has missing dependencies: fst-2.50.jar
ERROR: hadoop-resourceestimator has missing dependencies: java-util-1.9.0.jar
ERROR: hadoop-resourceestimator has missing dependencies: json-io-2.5.1.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
hadoop-yarn-server-web-proxy-3.2.0-SNAPSHOT.jar
ERROR: hadoop-resourceestimator has missing dependencies: leveldbjni-all-1.8.jar
ERROR: hadoop-resourceestimator has missing dependencies: javax.inject-1.jar
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-12558) distcp documentation is woefully out of date

2018-09-05 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HADOOP-12558:


Reopening to evaluate if this needs a fix and bumping priority.

> distcp documentation is woefully out of date
> 
>
> Key: HADOOP-12558
> URL: https://issues.apache.org/jira/browse/HADOOP-12558
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, tools/distcp
>Reporter: Allen Wittenauer
>Priority: Major
>  Labels: newbie
>
> There are a ton of distcp tune-ables that have zero documentation outside of 
> the source code.  This should be fixed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: HADOOP-14163 proposal for new hadoop.apache.org

2018-09-04 Thread Arpit Agarwal
Requested a new git repo for the site: 
https://gitbox.apache.org/repos/asf/hadoop-site.git


On 9/4/18, 1:33 PM, "Mingliang Liu"  wrote:

It might be late but I'm +1 on the new site and transition proposal.

Thanks Marton.

On Fri, Aug 31, 2018 at 1:07 AM Elek, Marton  wrote:

> Bumping this thread one last time.
>
> I have the following proposal:
>
> 1. I will request a new git repository hadoop-site.git and import the
> new site to there (which has exactly the same content as the existing
> site).
>
> 2. I will ask infra to use the new repository as the source of
> hadoop.apache.org
>
> 3. I will manually sync all of the changes over the next two months back
> to the svn site from git (release announcements, new committers)
>
> IN CASE OF ANY PROBLEM we can switch back to the svn without any problem.
>
> If no-one objects within three days, I'll assume lazy consensus and
> start with this plan. Please comment if you have objections.
>
> Again: it allows immediate fallback at any time, as the svn repo will be kept
> as-is (+ I will keep it up-to-date for the next 2 months)
>
> Thanks,
> Marton
>
>
> On 06/21/2018 09:00 PM, Elek, Marton wrote:
> >
> > Thank you very much for bumping up this thread.
> >
> >
> > About [2]: (Just for clarification) the content of the proposed
> > website is exactly the same as the old one.
> >
> > About [1]. I believe that "mvn site" is perfect for the
> > documentation, but for website creation there are simpler and more
> > powerful tools.
> >
> > Hugo is simpler compared to jekyll: just one binary, without
> > dependencies, and it works everywhere (mac, linux, windows).
> >
> > Hugo is much more powerful compared to "mvn site": it is easier to
> > create/use a more modern layout/theme, and easier to handle the content
> > (for example, new release announcements could be generated as part of
> > the release process).
> >
> > I think it's very low risk to try out a new approach for the site (and
> > easy to rollback in case of problems)
> >
> > Marton
> >
> > ps: I just updated the patch/preview site with the recent releases:
> >
> > ***
> > * http://hadoop.anzix.net *
> > ***
> >
> > On 06/21/2018 01:27 AM, Vinod Kumar Vavilapalli wrote:
> >> Got pinged about this offline.
> >>
> >> Thanks for keeping at it, Marton!
> >>
> >> I think there are two road-blocks here
> >>   (1) Is the mechanism by which the website is built good enough -
> >> mvn-site / hugo etc?
> >>   (2) Is the new website good enough?
> >>
> >> For (1), I just think we need more committer attention and get
> >> feedback rapidly and get it in.
> >>
> >> For (2), how about we do it in a different way in the interest of
> >> progress?
> >>   - We create a hadoop.apache.org/new-site/ where this new site goes.
> >>   - We then modify the existing web-site to say that there is a new
> >> site/experience that folks can click on a link and navigate to
> >>   - As this new website matures and gets feedback & fixes, we finally
> >> pull the plug at a later point of time when we think we are good to go.
> >>
> >> Thoughts?
> >>
> >> +Vinod
> >>
> >>> On Feb 16, 2018, at 3:10 AM, Elek, Marton  wrote:
> >>>
> >>> Hi,
> >>>
> >>> I would like to bump this thread up.
> >>>
> >>> TLDR; There is a proposed version of a new hadoop site which is
> >>> available from here: https://elek.github.io/hadoop-site-proposal/ and
> >>> https://issues.apache.org/jira/browse/HADOOP-14163
> >>>
> >>> Please let me know what you think about it.
> >>>
> >>>
> >>> Longer version:
> >>>
> >>> This thread started a long time ago, with the goal of a more modern hadoop site:
> >>>
> >>> Goals were:
> >>>
> >>> 1. To make it easier to manage (the release entries could be
> >>> created by a script as part of the release process)
> >>> 2. To use a better look-and-feel
> >>> 3. Move it out from svn to git
> >>>
> >>> I proposed to:
> >>>
> >>> 1. Move the existing site to git and generate it with hugo (which is
> >>> a single, standalone binary)
> >>> 2. Move both the rendered and source branches to git.
> >>> 3. (Create a jenkins job to generate the site automatically)
> >>>
> >>> NOTE: this is just about forrest based hadoop.apache.org, NOT about
> >>> the documentation which is generated by mvn-site (as before)
> >>>
> >>>
> >>> I got a lot of valuable feedback and I improved the proposed site
> >>> according to the comments. Allen had some concerns about the chosen
> >>> technologies (hugo vs. mvn-site) and I answered all th

Re: HADOOP-14163 proposal for new hadoop.apache.org

2018-08-31 Thread Arpit Agarwal
+1

Thanks for initiating this Marton.


On 8/31/18, 1:07 AM, "Elek, Marton"  wrote:

Bumping this thread one last time.

I have the following proposal:

1. I will request a new git repository hadoop-site.git and import the 
new site there (which has exactly the same content as the existing site).

2. I will ask infra to use the new repository as the source of 
hadoop.apache.org

3. I will manually sync all of the changes over the next two months back 
to the svn site from git (release announcements, new committers)

IN CASE OF ANY PROBLEM we can switch back to the svn without any problem.

If no-one objects within three days, I'll assume lazy consensus and 
start with this plan. Please comment if you have objections.

Again: it allows immediate fallback at any time, as the svn repo will be kept 
as-is (+ I will keep it up-to-date for the next 2 months)

Thanks,
Marton


On 06/21/2018 09:00 PM, Elek, Marton wrote:
> 
> Thank you very much for bumping up this thread.
> 
> 
> About [2]: (Just for clarification) the content of the proposed 
> website is exactly the same as the old one.
> 
> About [1]. I believe that "mvn site" is perfect for the 
> documentation, but for website creation there are simpler and more 
> powerful tools.
> 
> Hugo is simpler compared to jekyll: just one binary, without 
> dependencies, and it works everywhere (mac, linux, windows).
> 
> Hugo is much more powerful compared to "mvn site": it is easier to 
> create/use a more modern layout/theme, and easier to handle the content 
> (for example, new release announcements could be generated as part of 
> the release process).
> 
> I think it's very low risk to try out a new approach for the site (and 
> easy to rollback in case of problems)
> 
> Marton
> 
> ps: I just updated the patch/preview site with the recent releases:
> 
> ***
> * http://hadoop.anzix.net *
> ***
> 
> On 06/21/2018 01:27 AM, Vinod Kumar Vavilapalli wrote:
>> Got pinged about this offline.
>>
>> Thanks for keeping at it, Marton!
>>
>> I think there are two road-blocks here
>>   (1) Is the mechanism by which the website is built good enough - 
>> mvn-site / hugo etc?
>>   (2) Is the new website good enough?
>>
>> For (1), I just think we need more committer attention and get 
>> feedback rapidly and get it in.
>>
>> For (2), how about we do it in a different way in the interest of 
>> progress?
>>   - We create a hadoop.apache.org/new-site/ where this new site goes.
>>   - We then modify the existing web-site to say that there is a new 
>> site/experience that folks can click on a link and navigate to
>>   - As this new website matures and gets feedback & fixes, we finally 
>> pull the plug at a later point of time when we think we are good to go.
>>
>> Thoughts?
>>
>> +Vinod
>>
>>> On Feb 16, 2018, at 3:10 AM, Elek, Marton  wrote:
>>>
>>> Hi,
>>>
>>> I would like to bump this thread up.
>>>
>>> TLDR; There is a proposed version of a new hadoop site which is 
>>> available from here: https://elek.github.io/hadoop-site-proposal/ and 
>>> https://issues.apache.org/jira/browse/HADOOP-14163
>>>
>>> Please let me know what you think about it.
>>>
>>>
>>> Longer version:
>>>
>>> This thread started a long time ago, with the goal of a more modern hadoop site:
>>>
>>> Goals were:
>>>
>>> 1. To make it easier to manage (the release entries could be 
>>> created by a script as part of the release process)
>>> 2. To use a better look-and-feel
>>> 3. Move it out from svn to git
>>>
>>> I proposed to:
>>>
>>> 1. Move the existing site to git and generate it with hugo (which is 
>>> a single, standalone binary)
>>> 2. Move both the rendered and source branches to git.
>>> 3. (Create a jenkins job to generate the site automatically)
>>>
>>> NOTE: this is just about forrest based hadoop.apache.org, NOT about 
>>> the documentation which is generated by mvn-site (as before)
>>>
>>>
>>> I got a lot of valuable feedback and I improved the proposed site 
>>> according to the comments. Allen had some concerns about the chosen 
>>> technologies (hugo vs. mvn-site) and I answered all the questions about why 
>>> I think mvn-site is best for documentation and hugo is best for 
>>> generating the site.
>>>
>>>
>>> I would like to finish this effort/jira: I would like to start a 
>>> discussion about using this proposed version and approach as a new 
>>> site of Apache Hadoop. Please let me know what you think.
>>>
>>>
>>> Thanks a lot,
>>> Marton
>>>
   

Re: [VOTE] Merge ContainerIO branch (HDDS-48) in to trunk

2018-07-06 Thread Arpit Agarwal
Late +1 for the merge.

Thanks for driving this improvement Hanisha and Bharat.


On 6/29/18, 3:10 PM, "Bharat Viswanadham"  wrote:

Hi All,

Given the positive response to the discussion thread [1], here is the 
formal vote thread to merge HDDS-48 in to trunk.

Summary of code changes:
1. Code changes for this branch are done in the hadoop-hdds subproject and 
hadoop-ozone subproject, there is no impact to hadoop-hdfs.
2. Added support for multiple container types in the datanode code path.
3. Added disk layout logic for the containers to support future upgrades.
4. Added support for a volume choosing policy to distribute containers across 
disks on the datanode.
5. Changed the format of the .container file to a human-readable format 
(yaml)


 The vote will run for 7 days, ending Fri July 6th. I will start this vote 
with my +1.

Thanks,
Bharat

[1] 
https://lists.apache.org/thread.html/79998ebd2c3837913a22097102efd8f41c3b08cb1799c3d3dea4876b@%3Chdfs-dev.hadoop.apache.org%3E


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org




Re: [VOTE] reset/force push to clean up inadvertent merge commit pushed to trunk

2018-07-06 Thread Arpit Agarwal
Thanks Sunil!

@Subru and others who voted for force push, are you okay with cancelling this 
vote and declaring trunk open for commits?


On 7/6/18, 11:16 AM, "Sunil G"  wrote:

Thanks. These patches are now restored.

- Sunil


On Fri, Jul 6, 2018 at 11:14 AM Vinod Kumar Vavilapalli 
wrote:

> +1
>
> Thanks
> +Vinod
>
>
> On Jul 6, 2018, at 11:12 AM, Sunil G  wrote:
>
> I just checked.  YARN-7556 and YARN-7451 can be cherry-picked.
> I cherry-picked them in my local tree and compiled. Things are good.
>
> I can push this now, which will restore trunk to its original state.
> I can do this if there are no objections.
>
> - Sunil
    >
> On Fri, Jul 6, 2018 at 11:10 AM Arpit Agarwal 
> wrote:
>
> afaict YARN-8435 is still in trunk. YARN-7556 and YARN-7451 are not.
>
>
> From: Giovanni Matteo Fumarola 
> Date: Friday, July 6, 2018 at 10:59 AM
> To: Vinod Kumar Vavilapalli 
> Cc: Anu Engineer , Arpit Agarwal <
> aagar...@hortonworks.com>, "su...@apache.org" , "
> yarn-...@hadoop.apache.org" , "
> hdfs-...@hadoop.apache.org" , "
> common-dev@hadoop.apache.org" , "
> mapreduce-...@hadoop.apache.org" 
> Subject: Re: [VOTE] reset/force push to clean up inadvertent merge commit
> pushed to trunk
>
> Everything seems ok except that the 3 commits YARN-8435, YARN-7556, YARN-7451
> are no longer in trunk due to the revert.
>
> Haibo/Robert if you can recommit your patches I will commit mine
> subsequently to preserve the original order.
>
> (My apologies for the mess I made with the merge commit)
>
> On Fri, Jul 6, 2018 at 10:42 AM, Vinod Kumar Vavilapalli <
> vino...@apache.org> wrote:
> I will add that the branch also successfully compiles.
>
> Let's just move forward as is, unblock commits and just fix things if
> anything is broken.
>
> +Vinod
>
> On Jul 6, 2018, at 10:30 AM, Anu Engineer 
> <aengin...@hortonworks.com> wrote:
>
>
> Hi All,
>
> [ Thanks to Arpit for working offline and verifying that branch is
>
> indeed good.]
>
>
> I want to summarize what I know of this issue and also solicit other
>
> points of view.
>
>
> We reverted the commit(c163d1797) from the branch, as soon as we noticed
>
> it. That is, we have made no other commits after the merge commit.
>
>
> We used the following command to revert
> git revert -c c163d1797ade0f47d35b4a44381b8ef1dfec5b60 -m 1
>
> Giovanni's branch had three commits + merge. The JIRAs he had were
>
> YARN-7451, YARN-7556, YARN-8435.
>
>
> The issue seems to be that the revert of the merge has some diffs. I am not a
>
> YARN developer, so the only problem is to look at the revert and see if
> there were any spurious edits in Giovanni's original commit + merge.
>
> If there are none, we don't need a reset/force push.  But if we find an
>
> issue I am more than willing to go the force commit route.
>
>
> The revert takes the trunk back to the point of the first commit from
>
> Giovanni, which is YARN-8435. His branch also rewrote the order of
> commits, which we have lost due to the revert.
>
>
> Based on what I know so far, I am -1 on the force push.
>
> In other words, I am trying to understand why we need the force push. I
>
> have left a similar comment in JIRA (
> https://issues.apache.org/jira/browse/INFRA-16727) too.
>
>
>
> Thanks
> Anu
>
>
> On 7/6/18, 10:24 AM, "Arpit Agarwal" 
> <aagar...@hortonworks.com> wrote:
>
>
>   -1 for the force push. Nothing is broken in trunk. The history looks
>
> ugly for two commits and we can live with it.
>
>
>   The revert restored the branch to Giovanni's intent. i.e. only
>
> YARN-8435 is applied. Verified there is no delta between hashes 0d9804d and
> 39ad989 (HEAD).
>
>
>   39ad989 2018-07-05 aengineer@ o {apache/trunk} Revert "Merge branch
>
> 't...
>
>   c163d17 2018-07-05 gifuma@apa M─┐ Merge branch 'trunk' of
>
> https://git-...
>
>   99febe7 2018-07-05 rkanter@ap │ o YARN-7451. Add missing tests to
>
>

Re: [VOTE] reset/force push to clean up inadvertent merge commit pushed to trunk

2018-07-06 Thread Arpit Agarwal
+1 from me.

Also +1 to reopen trunk for all commits.


On 7/6/18, 11:12 AM, "Sunil G"  wrote:

I just checked.  YARN-7556 and YARN-7451 can be cherry-picked.
I cherry-picked them in my local tree and compiled. Things are good.

I can push this now, which will restore trunk to its original state.
I can do this if there are no objections.

- Sunil

On Fri, Jul 6, 2018 at 11:10 AM Arpit Agarwal 
wrote:

> afaict YARN-8435 is still in trunk. YARN-7556 and YARN-7451 are not.
>
>
> From: Giovanni Matteo Fumarola 
> Date: Friday, July 6, 2018 at 10:59 AM
> To: Vinod Kumar Vavilapalli 
    > Cc: Anu Engineer , Arpit Agarwal <
> aagar...@hortonworks.com>, "su...@apache.org" , "
> yarn-...@hadoop.apache.org" , "
> hdfs-...@hadoop.apache.org" , "
> common-dev@hadoop.apache.org" , "
> mapreduce-...@hadoop.apache.org" 
> Subject: Re: [VOTE] reset/force push to clean up inadvertent merge commit
> pushed to trunk
>
> Everything seems ok except that the 3 commits YARN-8435, YARN-7556, YARN-7451
> are no longer in trunk due to the revert.
>
> Haibo/Robert if you can recommit your patches I will commit mine
> subsequently to preserve the original order.
>
> (My apologies for the mess I made with the merge commit)
>
> On Fri, Jul 6, 2018 at 10:42 AM, Vinod Kumar Vavilapalli <
> vino...@apache.org> wrote:
> I will add that the branch also successfully compiles.
>
> Let's just move forward as is, unblock commits and just fix things if
> anything is broken.
>
> +Vinod
>
> > On Jul 6, 2018, at 10:30 AM, Anu Engineer <aengin...@hortonworks.com> wrote:
> >
> > Hi All,
> >
> > [ Thanks to Arpit for working offline and verifying that branch is
> indeed good.]
> >
> > I want to summarize what I know of this issue and also solicit other
> points of view.
> >
> > We reverted the commit(c163d1797) from the branch, as soon as we noticed
> it. That is, we have made no other commits after the merge commit.
> >
> > We used the following command to revert
> > git revert -c c163d1797ade0f47d35b4a44381b8ef1dfec5b60 -m 1
> >
> > Giovanni's branch had three commits + merge. The JIRAs he had were
> YARN-7451, YARN-7556, YARN-8435.
> >
> > The issue seems to be that the revert of the merge has some diffs. I am not a
> YARN developer, so the only problem is to look at the revert and see if
> there were any spurious edits in Giovanni's original commit + merge.
> > If there are none, we don't need a reset/force push.  But if we find an
> issue I am more than willing to go the force commit route.
> >
> > The revert takes the trunk back to the point of the first commit from
> Giovanni, which is YARN-8435. His branch also rewrote the order of
> commits, which we have lost due to the revert.
> >
    > > Based on what I know so far, I am -1 on the force push.
> >
> > In other words, I am trying to understand why we need the force push. I
> have left a similar comment in JIRA (
> https://issues.apache.org/jira/browse/INFRA-16727) too.
> >
> >
> > Thanks
> > Anu
> >
> >
> > On 7/6/18, 10:24 AM, "Arpit Agarwal" <aagar...@hortonworks.com> wrote:
> >
> >-1 for the force push. Nothing is broken in trunk. The history looks
> ugly for two commits and we can live with it.
> >
> >The revert restored the branch to Giovanni's intent. i.e. only
> YARN-8435 is applied. Verified there is no delta between hashes 0d9804d and
> 39ad989 (HEAD).
> >
> >39ad989 2018-07-05 aengineer@ o {apache/trunk} Revert "Merge branch
> 't...
> >c163d17 2018-07-05 gifuma@apa M─┐ Merge branch 'trunk' of
> https://git-...
> >99febe7 2018-07-05 rkanter@ap │ o YARN-7451. Add missing tests to
> veri...
> >1726247 2018-07-05 haibochen@ │ o YARN-7556. Fair scheduler
> configurat...
> >0d9804d 2018-07-05 gifuma@apa o │ YARN-8435. Fix NPE when the same
> cli...
> >71df8c2 2018-07-05 nanda@apac o─┘ HDDS-212. Introduce
> NodeStateManager...
> >
> >Regards,
> >Arpit
> >
> >
> >On 7/5/18, 2:37 PM, "Subru Krishnan"  su..

Re: [VOTE] reset/force push to clean up inadvertent merge commit pushed to trunk

2018-07-06 Thread Arpit Agarwal
afaict YARN-8435 is still in trunk. YARN-7556 and YARN-7451 are not.


From: Giovanni Matteo Fumarola 
Date: Friday, July 6, 2018 at 10:59 AM
To: Vinod Kumar Vavilapalli 
Cc: Anu Engineer , Arpit Agarwal 
, "su...@apache.org" , 
"yarn-...@hadoop.apache.org" , 
"hdfs-...@hadoop.apache.org" , 
"common-dev@hadoop.apache.org" , 
"mapreduce-...@hadoop.apache.org" 
Subject: Re: [VOTE] reset/force push to clean up inadvertent merge commit 
pushed to trunk

Everything seems ok except that the 3 commits YARN-8435, YARN-7556, YARN-7451 are 
no longer in trunk due to the revert.

Haibo/Robert if you can recommit your patches I will commit mine subsequently 
to preserve the original order.

(My apologies for the mess I made with the merge commit)

On Fri, Jul 6, 2018 at 10:42 AM, Vinod Kumar Vavilapalli 
<vino...@apache.org> wrote:
I will add that the branch also successfully compiles.

Let's just move forward as is, unblock commits and just fix things if anything 
is broken.

+Vinod

> On Jul 6, 2018, at 10:30 AM, Anu Engineer 
> <aengin...@hortonworks.com> wrote:
>
> Hi All,
>
> [ Thanks to Arpit for working offline and verifying that branch is indeed 
> good.]
>
> I want to summarize what I know of this issue and also solicit other points 
> of view.
>
> We reverted the commit(c163d1797) from the branch, as soon as we noticed it. 
> That is, we have made no other commits after the merge commit.
>
> We used the following command to revert
> git revert -c c163d1797ade0f47d35b4a44381b8ef1dfec5b60 -m 1
>
> Giovanni's branch had three commits + merge. The JIRAs he had were YARN-7451, 
> YARN-7556, YARN-8435.
>
> The issue seems to be that the revert of the merge has some diffs. I am not a YARN 
> developer, so the only problem is to look at the revert and see if there were 
> any spurious edits in Giovanni's original commit + merge.
> If there are none, we don't need a reset/force push.  But if we find an issue 
> I am more than willing to go the force commit route.
>
> The revert takes the trunk back to the point of the first commit from 
> Giovanni, which is YARN-8435. His branch also rewrote the order of 
> commits, which we have lost due to the revert.
>
> Based on what I know so far, I am -1 on the force push.
>
> In other words, I am trying to understand why we need the force push. I have 
> left a similar comment in JIRA 
> (https://issues.apache.org/jira/browse/INFRA-16727) too.
>
>
> Thanks
> Anu
>
>
> On 7/6/18, 10:24 AM, "Arpit Agarwal" 
> <aagar...@hortonworks.com> wrote:
>
>-1 for the force push. Nothing is broken in trunk. The history looks ugly 
> for two commits and we can live with it.
>
>The revert restored the branch to Giovanni's intent. i.e. only YARN-8435 
> is applied. Verified there is no delta between hashes 0d9804d and 39ad989 
> (HEAD).
>
>39ad989 2018-07-05 aengineer@ o {apache/trunk} Revert "Merge branch 't...
>c163d17 2018-07-05 gifuma@apa M─┐ Merge branch 'trunk' of https://git-...
>99febe7 2018-07-05 rkanter@ap │ o YARN-7451. Add missing tests to veri...
>1726247 2018-07-05 haibochen@ │ o YARN-7556. Fair scheduler configurat...
>0d9804d 2018-07-05 gifuma@apa o │ YARN-8435. Fix NPE when the same cli...
>71df8c2 2018-07-05 nanda@apac o─┘ HDDS-212. Introduce NodeStateManager...
>
>Regards,
>Arpit
>
>
>On 7/5/18, 2:37 PM, "Subru Krishnan" 
> <su...@apache.org> wrote:
>
>Folks,
>
>There was a merge commit accidentally pushed to trunk, you can find the
>details in the mail thread [1].
>
>I have raised an INFRA ticket [2] to reset/force push to clean up 
> trunk.
>
>Can we have a quick vote for INFRA sign-off to proceed as this is 
> blocking
>all commits?
>
>Thanks,
>Subru
>
>[1]
>
> http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201807.mbox/%3CCAHqguubKBqwfUMwhtJuSD7X1Bgfro_P6FV%2BhhFhMMYRaxFsF9Q%40mail.gmail.com%3E
>[2] https://issues.apache.org/jira/browse/INFRA-16727
>
>
>
>-
>To unsubscribe, e-mail: 
> common-dev-unsubscr...@hadoop.apache.org
>For additional commands, e-mail: 
> common-dev-h...@hadoop.apache.org
>
>
>
> -
> To unsubscribe, e-mail: 
> common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: 
> common-dev-h...@hadoop.apache.org

-
To unsubscribe, e-mail: 
yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 
yarn-dev-h...@hadoop.apache.org



Re: [VOTE] reset/force push to clean up inadvertent merge commit pushed to trunk

2018-07-06 Thread Arpit Agarwal
-1 for the force push. Nothing is broken in trunk. The history looks ugly for 
two commits and we can live with it.

The revert restored the branch to Giovanni's intent. i.e. only YARN-8435 is 
applied. Verified there is no delta between hashes 0d9804d and 39ad989 (HEAD).

39ad989 2018-07-05 aengineer@ o {apache/trunk} Revert "Merge branch 't...
c163d17 2018-07-05 gifuma@apa M─┐ Merge branch 'trunk' of https://git-...
99febe7 2018-07-05 rkanter@ap │ o YARN-7451. Add missing tests to veri...
1726247 2018-07-05 haibochen@ │ o YARN-7556. Fair scheduler configurat...
0d9804d 2018-07-05 gifuma@apa o │ YARN-8435. Fix NPE when the same cli...
71df8c2 2018-07-05 nanda@apac o─┘ HDDS-212. Introduce NodeStateManager...

Regards,
Arpit


On 7/5/18, 2:37 PM, "Subru Krishnan"  wrote:

Folks,

There was a merge commit accidentally pushed to trunk, you can find the
details in the mail thread [1].

I have raised an INFRA ticket [2] to reset/force push to clean up trunk.

Can we have a quick vote for INFRA sign-off to proceed as this is blocking
all commits?

Thanks,
Subru

[1]

http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201807.mbox/%3CCAHqguubKBqwfUMwhtJuSD7X1Bgfro_P6FV%2BhhFhMMYRaxFsF9Q%40mail.gmail.com%3E
[2] https://issues.apache.org/jira/browse/INFRA-16727




Re: [DISCUSS]Merge ContainerIO branch (HDDS-48) in to trunk

2018-06-29 Thread Arpit Agarwal
+1 for merging this branch.

Added common-dev@


On 6/28/18, 3:16 PM, "Bharat Viswanadham"  wrote:

Hi everyone,

I’d like to start a thread to discuss merging the HDDS-48 branch to trunk. 
The ContainerIO work refactors the HDDS Datanode IO path to enforce clean 
separation between the Container management and the Storage layers.

Note: HDDS/Ozone code is not compiled by default in trunk. The 'hdds' maven 
profile must be enabled to compile the branch payload.
 
The merge payload includes the following key improvements:
1. Support multiple container types on the datanode.
2. Adopt a new disk layout for the containers that supports future upgrades.
3. Support a volume choosing policy for container data locations.
4. Changed the format of the .container file to a human-readable format 
(yaml)
 
Below are the links for design documents attached to HDDS-48.
 

https://issues.apache.org/jira/secure/attachment/12923107/ContainerIO-StorageManagement-DesignDoc.pdf
https://issues.apache.org/jira/secure/attachment/12923108/HDDS DataNode 
Disk Layout.pdf
 
The branch is ready to merge. Over the next week we will clean up the 
unused classes, fix old integration tests and continue testing the changes.
 
Thanks to Hanisha Koneru, Arpit Agarwal, Anu Engineer, Jitendra Pandey,  
Xiaoyu Yao, Ajay Kumar, Mukul Kumar Singh, Marton Elek and Shashikant Banerjee 
for their contributions in design, development and code reviews.

Thanks,
Bharat



-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org





Re: HADOOP-15124 review

2018-06-27 Thread Arpit Agarwal
Hi Igor, it is perfectly fine to request a code review on the dev mailing list.


From: Igor Dvorzhak 
Date: Tuesday, June 26, 2018 at 9:27 PM
To: 
Cc: , 
Subject: Re: HADOOP-15124 review

Hi Yiqun,

Thank you for the explanation. I didn't know that this is not appropriate and 
will not do so in future.

Thanks,
Igor


On Tue, Jun 26, 2018 at 7:18 PM Lin,Yiqun(vip.com) 
<yiqun01@vipshop.com> wrote:
Hi Igor,

It’s not appropriate to ask for a review in the dev mailing list. The dev 
mailing list is mainly used for discussion and for answering users’ questions. You 
can ask for the review under the specific JIRA; that will be seen by committers and 
others. If they have time, they will help with the review.

Yiqun
Thanks

From: Igor Dvorzhak 
[mailto:i...@google.com.INVALID]
Sent: June 26, 2018 23:52
To: hdfs-...@hadoop.apache.org; 
common-dev@hadoop.apache.org
Subject: Re: HADOOP-15124 review

+common-dev@hadoop.apache.org

On Tue, Jun 26, 2018 at 8:49 AM Igor Dvorzhak 
<i...@google.com> wrote:
Hello,

I have a patch that 
improves the FileSystem.Statistics implementation and I would like to commit it.

Could somebody review it?
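
For context, this is roughly how callers consume these counters today (a usage 
sketch only, not the patch itself):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class StatsDump {
    public static void main(String[] args) throws Exception {
      FileSystem fs = FileSystem.get(new Configuration());
      try (FSDataInputStream in = fs.open(new Path("/etc/hosts"))) {
        in.read(); // generate a little read traffic
      }
      // Statistics are kept per scheme and aggregated across threads.
      for (FileSystem.Statistics stats : FileSystem.getAllStatistics()) {
        System.out.println(stats.getScheme()
            + " bytesRead=" + stats.getBytesRead()
            + " readOps=" + stats.getReadOps());
      }
    }
  }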

Best regards,
Igor Dvorzhak
 This communication is intended only for the addressee(s) and may contain 
information that is privileged and confidential. You are hereby notified that, 
if you are not an intended recipient listed above, or an authorized employee or 
agent of an addressee of this communication responsible for delivering e-mail 
messages to an intended recipient, any dissemination, distribution or 
reproduction of this communication (including any attachments hereto) is 
strictly prohibited. If you have received this communication in error, please 
notify us immediately by a reply e-mail addressed to the sender and permanently 
delete the original e-mail communication and any attachments from all storage 
devices without making or otherwise retaining a copy.


[jira] [Created] (HADOOP-15493) DiskChecker should handle disk full situation

2018-05-24 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-15493:
--

 Summary: DiskChecker should handle disk full situation
 Key: HADOOP-15493
 URL: https://issues.apache.org/jira/browse/HADOOP-15493
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


DiskChecker#checkDirWithDiskIo creates a file to verify that the disk is 
writable.

However, the check should not fail when file creation fails due to the disk being full. 
This avoids marking full disks as _failed_.

Reported by [~kihwal] and [~daryn] in HADOOP-15450. 
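
For illustration, a possible shape of the fix (a rough sketch only, not the 
committed code; it assumes ENOSPC can be recognized from the exception message, 
and the class/method names are made up):

{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

final class DiskFullAwareCheck {
  // ENOSPC typically surfaces as "No space left on device" on Linux.
  private static boolean isDiskFull(IOException e) {
    String msg = e.getMessage();
    return msg != null && msg.contains("No space left on device");
  }

  /** Verify the directory is writable without failing a merely-full disk. */
  static void checkDirWritability(File dir) throws IOException {
    File probe = new File(dir, "diskcheck-" + System.nanoTime() + ".tmp");
    try (FileOutputStream out = new FileOutputStream(probe)) {
      out.write(1); // force a real write to the volume
    } catch (IOException e) {
      if (!isDiskFull(e)) {
        throw e; // genuine failure: let the caller mark the disk bad
      }
      // Full disk: swallow the error so the volume is not marked failed.
    } finally {
      probe.delete();
    }
  }
}
{code}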



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15451) Avoid fsync storm triggered by DiskChecker and handle disk full situation

2018-05-08 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-15451:
--

 Summary: Avoid fsync storm triggered by DiskChecker and handle 
disk full situation
 Key: HADOOP-15451
 URL: https://issues.apache.org/jira/browse/HADOOP-15451
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Arpit Agarwal


Fix disk checker issues reported by [~kihwal] in HADOOP-13738:
# When space is low, the OS returns ENOSPC. Instead of writes simply stopping, the 
drive is marked bad and replication happens. This makes the cluster-wide space 
problem worse. If the number of "failed" drives exceeds the DFIP limit, the 
datanode shuts down.
# There are non-HDFS users of DiskChecker, who use it proactively, not just on 
failures. This was fine before, but now it incurs heavy I/O due to the introduction 
of fsync() in the code (see the sketch below).
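
For issue 2, one possible mitigation is to throttle the I/O-heavy probe (again 
just a sketch; the interval and class name below are illustrative):

{code}
import java.io.File;
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;

final class ThrottledDiskCheck {
  private static final long MIN_GAP_MS = 60_000; // illustrative interval
  private static final ConcurrentHashMap<String, Long> LAST_CHECK =
      new ConcurrentHashMap<>();

  /** Run the fsync-heavy probe at most once per interval per directory. */
  static void checkDirWithDiskIoThrottled(File dir) throws IOException {
    long now = System.currentTimeMillis();
    Long last = LAST_CHECK.get(dir.getPath());
    if (last != null && now - last < MIN_GAP_MS) {
      return; // checked recently; skip the expensive fsync probe
    }
    // ... the real file-creation + fsync probe would run here ...
    LAST_CHECK.put(dir.getPath(), now);
  }
}
{code}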



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15450) Avoid fsync storm triggered by DiskChecker and handle disk full situation

2018-05-07 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-15450:
--

 Summary: Avoid fsync storm triggered by DiskChecker and handle 
disk full situation
 Key: HADOOP-15450
 URL: https://issues.apache.org/jira/browse/HADOOP-15450
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


Fix disk checker issues reported by [~kihwal] in HADOOP-13738:
1. When space is low, the OS returns ENOSPC. Instead of writes simply stopping, the 
drive is marked bad and replication happens. This makes the cluster-wide space 
problem worse. If the number of "failed" drives exceeds the DFIP limit, the 
datanode shuts down.
1. There are non-HDFS users of DiskChecker, who use it proactively, not just on 
failures. This was fine before, but now it incurs heavy I/O due to the introduction 
of fsync() in the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)

2018-04-03 Thread Arpit Agarwal
Thanks Wangda, I see the shaded jars now.

Are the repo jars required to be the same as the binary release? They don’t 
match right now, probably they got rebuilt.

+1 (binding), modulo that remaining question.

* Verified signatures
* Verified checksums for source and binary artefacts
* Sanity checked jars on r.a.o. 
* Built from source
* Deployed to 3 node secure cluster with NameNode HA
* Verified HDFS web UIs
* Tried out HDFS shell commands
* Ran sample MapReduce jobs

Thanks!


--
From: Wangda Tan 
Date: Monday, April 2, 2018 at 9:25 PM
To: Arpit Agarwal 
Cc: Gera Shegalov , Sunil G , 
"yarn-...@hadoop.apache.org" , Hdfs-dev 
, Hadoop Common , 
"mapreduce-...@hadoop.apache.org" , Vinod 
Kumar Vavilapalli 
Subject: Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)

As pointed out by Arpit, the previously deployed shaded jars were incorrect. I just 
redeployed the jars and staged them. @Arpit, could you please check the updated Maven 
repo? https://repository.apache.org/content/repositories/orgapachehadoop-1092 

Since the jars inside the binary tarballs are correct 
(http://people.apache.org/~wangda/hadoop-3.1.0-RC1/), I think we don't need to 
roll another RC; just updating the Maven repo should be sufficient. 

Best,
Wangda


On Mon, Apr 2, 2018 at 2:39 PM, Wangda Tan <wheele...@gmail.com> wrote:
Hi Arpit, 

Thanks for pointing out this.

I just removed all .md5 files from the artifacts. I found that md5 checksums still exist 
in the .mds files; I didn't remove them from the .mds files because they are generated 
by the create-release script and the Apache guidance is "should not" rather than "must 
not". Please let me know if you think they need to be removed as well. 

- Wangda



On Mon, Apr 2, 2018 at 1:37 PM, Arpit Agarwal <aagar...@hortonworks.com> 
wrote:
Thanks for putting together this RC, Wangda. 

The guidance from Apache is to omit MD5s, specifically:
  > SHOULD NOT supply a MD5 checksum file (because MD5 is too broken).

https://www.apache.org/dev/release-distribution#sigs-and-sums

 


On Apr 2, 2018, at 7:03 AM, Wangda Tan <wheele...@gmail.com> wrote:

Hi Gera,

It's my bad, I thought only the src/bin tarballs were enough.

I just uploaded all other things under artifact/ to
http://people.apache.org/~wangda/hadoop-3.1.0-RC1/

Please let me know if you have any other comments.

Thanks,
Wangda


On Mon, Apr 2, 2018 at 12:50 AM, Gera Shegalov <ger...@gmail.com> wrote:


Thanks, Wangda!

There are many more artifacts in previous votes, e.g., see
http://home.apache.org/~junping_du/hadoop-2.8.3-RC0/ .  Among others the
site tarball is missing.

On Sun, Apr 1, 2018 at 11:54 PM Sunil G <sun...@apache.org> wrote:


Thanks Wangda for initiating the release.

I tested this RC built from source file.


  - Tested MR apps (sleep, wc) and verified both new YARN UI and old RM
UI.
  - Below feature sanity is done
 - Application priority
 - Application timeout
 - Intra Queue preemption with priority based
 - DS based affinity tests to verify placement constraints.
  - Tested basic NodeLabel scenarios.
 - Added a couple of labels to a few nodes and the behavior is
 correct.
 - Verified old UI  and new YARN UI for labels.
 - Submitted apps to labelled cluster and it works fine.
 - Also performed few cli commands related to nodelabel.
  - Tested basic HA cases and they seem correct.
  - Tested new YARN UI . All pages are getting loaded correctly.


- Sunil

On Fri, Mar 30, 2018 at 9:45 AM Wangda Tan <wheele...@gmail.com> wrote:


Hi folks,

Thanks to the many who helped with this release since Dec 2017 [1].
We've

created RC1 for Apache Hadoop 3.1.0. The artifacts are available here:

http://people.apache.org/~wangda/hadoop-3.1.0-RC1

The RC tag in git is release-3.1.0-RC1. Last git commit SHA is
16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d

The maven artifacts are available via http://repository.apache.org at
https://repository.apache.org/content/repositories/orgapachehadoop-1090/

This vote will run 5 days, ending on Apr 3 at 11:59 pm Pacific.

3.1.0 contains 766 [2] fixed JIRA issues since 3.0.0. Notable additions
include first-class GPU/FPGA support on YARN, native services, support for
rich placement constraints in YARN, S3-related enhancements, allowing HDFS
block replicas to be provided by an external storage system, etc.

For 3.1.0 RC0 vote discussion, please see [3].

We’d like to use this as a starting release for 3.1.x [1], depending on
how it goes, get it stabilized and potentially use a 3.1.1 in several weeks
as the stable release.

We have done testing with a pseudo cluster:
- Ran distributed job.
- GPU scheduling/isolation.
- Placement constraints (intra-application anti-affinity) by using
distributed shell.

My +1 to start.

Best,
Wangda/Vinod

[1]

https://lists.apache.org/thread.html/b3fb3b6da8b6357a68513a6dfd10

Re: [VOTE] Release Apache Hadoop 3.0.1 (RC1)

2018-04-02 Thread Arpit Agarwal
Hi Lei,

It looks like the release artefacts have dummy shaded jars. E.g.

Repository Path:  
/org/apache/hadoop/hadoop-client-runtime/3.0.1/hadoop-client-runtime-3.0.1.jar
Uploaded by:  lei
Size: 44.47 KB
Uploaded Date:Fri Mar 16 2018 15:50:42 GMT-0700 (PDT)
Last Modified:Fri Mar 16 2018 15:50:42 GMT-0700 (PDT)

https://repository.apache.org/index.html#view-repositories;releases~browsestorage~/org/apache/hadoop/hadoop-client-runtime/3.0.1/hadoop-client-runtime-3.0.1.jar

Am I looking at this wrong or is this supposed to be the shaded jar which is 
~20MB?

Thanks,
Arpit



On 3/23/18, 10:18 AM, "Lei Xu"  wrote:

Hi, All

Thanks everyone for voting! The vote passes successfully with 6
binding +1s, 7 non-binding +1s and no -1s.

I will work on the staging and releases.

Best,


On Fri, Mar 23, 2018 at 5:10 AM, Kuhu Shukla  
wrote:
> +1 (non-binding)
>
> Built from source.
> Installed on a pseudo distributed cluster.
> Ran word count job and basic hdfs commands.
>
> Thank you for the effort on this release.
>
> Regards,
> Kuhu
>
> On Thu, Mar 22, 2018 at 5:25 PM, Elek, Marton  wrote:
>
>>
>> +1 (non binding)
>>
>> I did a full build from source code, created a docker container and did
>> various basic level tests with robotframework based automation and
>> docker-compose based pseudo clusters[1].
>>
>> Including:
>>
>> * Hdfs federation smoke test
>> * Basic ViewFS configuration
>> * Yarn example jobs
>> * Spark example jobs (with and without yarn)
>> * Simple hive table creation
>>
>> Marton
>>
>>
>> [1]: https://github.com/flokkr/runtime-compose
>>
>> On 03/18/2018 05:11 AM, Lei Xu wrote:
>>
>>> Hi, all
>>>
>>> I've created release candidate RC-1 for Apache Hadoop 3.0.1
>>>
>>> Apache Hadoop 3.0.1 will be the first bug fix release for the Apache
>>> Hadoop 3.0 release line. It includes 49 bug fixes and security fixes, of
>>> which 12 are blockers and 17 are critical.
>>>
>>> Please note:
>>> * HDFS-12990. Change default NameNode RPC port back to 8020. It makes
>>> incompatible changes to Hadoop 3.0.0.  After 3.0.1 is released, Apache
>>> Hadoop 3.0.0 will be deprecated due to this change.
>>>
>>> The release page is:
>>> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3.0+Release
>>>
>>> New RC is available at: http://home.apache.org/~lei/hadoop-3.0.1-RC1/
>>>
>>> The git tag is release-3.0.1-RC1, and the latest commit is
>>> 496dc57cc2e4f4da117f7a8e3840aaeac0c1d2d0
>>>
>>> The maven artifacts are available at:
>>> https://repository.apache.org/content/repositories/orgapachehadoop-1081/
>>>
>>> Please try the release and vote; the vote will run for the usual 5
>>> days, ending on 3/22/2018 6pm PST.
>>>
>>> Thanks!
>>>
>>> -
>>> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
>>> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>>>
>>>
>> -
>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>>
>>



-- 
Lei (Eddy) Xu
Software Engineer, Cloudera

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org





Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)

2018-04-02 Thread Arpit Agarwal
Thanks for putting together this RC, Wangda.

The guidance from Apache is to omit MD5s, specifically:
  > SHOULD NOT supply a MD5 checksum file (because MD5 is too broken).

https://www.apache.org/dev/release-distribution#sigs-and-sums



On Apr 2, 2018, at 7:03 AM, Wangda Tan  wrote:

Hi Gera,

It's my bad, I thought only the src/bin tarball was enough.

I just uploaded all other things under artifact/ to
http://people.apache.org/~wangda/hadoop-3.1.0-RC1/

Please let me know if you have any other comments.

Thanks,
Wangda


On Mon, Apr 2, 2018 at 12:50 AM, Gera Shegalov  wrote:

Thanks, Wangda!

There are many more artifacts in previous votes, e.g., see
http://home.apache.org/~junping_du/hadoop-2.8.3-RC0/ .  Among others the
site tarball is missing.

On Sun, Apr 1, 2018 at 11:54 PM Sunil G  wrote:

Thanks Wangda for initiating the release.

I tested this RC built from source file.


  - Tested MR apps (sleep, wc) and verified both new YARN UI and old RM
UI.
  - Below feature sanity is done
 - Application priority
 - Application timeout
 - Intra Queue preemption with priority based
 - DS based affinity tests to verify placement constraints.
  - Tested basic NodeLabel scenarios.
 - Added couple of labels to few of nodes and behavior is coming
 correct.
 - Verified old UI  and new YARN UI for labels.
 - Submitted apps to labelled cluster and it works fine.
 - Also performed few cli commands related to nodelabel.
  - Test basic HA cases and seems correct.
  - Tested new YARN UI . All pages are getting loaded correctly.


- Sunil

On Fri, Mar 30, 2018 at 9:45 AM Wangda Tan  wrote:

Hi folks,

Thanks to the many who helped with this release since Dec 2017 [1].
We've
created RC1 for Apache Hadoop 3.1.0. The artifacts are available here:

http://people.apache.org/~wangda/hadoop-3.1.0-RC1

The RC tag in git is release-3.1.0-RC1. Last git commit SHA is
16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d

The maven artifacts are available via repository.apache.org at
https://repository.apache.org/content/repositories/
orgapachehadoop-1090/
This vote will run 5 days, ending on Apr 3 at 11:59 pm Pacific.

3.1.0 contains 766 [2] fixed JIRA issues since 3.0.0. Notable additions
include the first class GPU/FPGA support on YARN, Native services,
Support
rich placement constraints in YARN, S3-related enhancements, allow HDFS
block replicas to be provided by an external storage system, etc.

For 3.1.0 RC0 vote discussion, please see [3].

We’d like to use this as a starting release for 3.1.x [1], depending on
how
it goes, get it stabilized and potentially use a 3.1.1 in several weeks
as
the stable release.

We have done testing with a pseudo cluster:
- Ran distributed job.
- GPU scheduling/isolation.
- Placement constraints (intra-application anti-affinity) by using
distributed shell.

My +1 to start.

Best,
Wangda/Vinod

[1]

https://lists.apache.org/thread.html/b3fb3b6da8b6357a68513a6dfd104b
c9e19e559aedc5ebedb4ca08c8@%3Cyarn-dev.hadoop.apache.org%3E
[2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.0)
AND fixVersion not in (3.0.0, 3.0.0-beta1) AND status = Resolved ORDER
BY
fixVersion ASC
[3]

https://lists.apache.org/thread.html/b3a7dc075b7329fd660f65b48237d7
2d4061f26f83547e41d0983ea6@%3Cyarn-dev.hadoop.apache.org%3E






[jira] [Created] (HADOOP-15334) Upgrade Maven surefire plugin

2018-03-21 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-15334:
--

 Summary: Upgrade Maven surefire plugin
 Key: HADOOP-15334
 URL: https://issues.apache.org/jira/browse/HADOOP-15334
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


Recent versions of the surefire plugin suppress summary test execution output 
in quiet mode. This was recently fixed in plugin version 2.21.0 (via 
SUREFIRE-1436).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: About reset branch-3.1 to trunk before release.

2018-03-19 Thread Arpit Agarwal
Thanks Wangda.


On 3/19/18, 11:38 AM, "Wangda Tan"  wrote:

Done JIRA fix version update:

Moved all JIRAs with fixVersion = 3.2.0 to 3.1.0 except following few fixes
(which committed after 49c747ab187d0650143205ba57ca19607ec4c6bd)

YARN-8002. Support NOT_SELF and ALL namespace types for allocation tag.
(Weiwei Yang via wangda)

HADOOP-15262. AliyunOSS: move files under a directory in parallel when
rename a directory. Contributed by Jinhu Wu.

MAPREDUCE-7066. TestQueue fails on Java9

YARN-8028. Support authorizeUserAccessToQueue in RMWebServices.
Contributed by Wangda Tan.

YARN-8040. [UI2] New YARN UI webapp does not respect current pathname
for REST api. Contributed by Sunil G.

Thanks,
Wangda

On Mon, Mar 19, 2018 at 11:12 AM, Wangda Tan  wrote:

> Thanks Akira for the additional vote,
>
> With help from Apache Infra Team (Daniel Takamori), we just reset
> branch-3.1 to trunk (SHA: 49c747ab187d0650143205ba57ca19607ec4c6bd). Will
> update JIRA fix version shortly.
>
> - Wangda
>
> On Sun, Mar 18, 2018 at 6:10 PM, Akira Ajisaka  wrote:
>
>> +1 for resetting branch-3.1.
>>
>> Thanks,
>> Akira
>>
>>
>> On 2018/03/18 12:51, Wangda Tan wrote:
>>
>>> Thanks for sharing your thoughts.
>>>
>>> We have done build and single node cluster deploy / test for the latest
>>> trunk code (commit: 49c747ab187d0650143205ba57ca19607ec4c6bd). Since
>>> there
>>> are no objections, so I will go ahead to do the branch replace.
>>>
>>> Since we don't have force push permission to release branches. I just
>>> filed
>>> https://issues.apache.org/jira/browse/INFRA-16204 to get help from
>>> Apache
>>> infra team.
>>>
>>> Please hold any commits to branch-3.1, will keep this email thread
>>> posted.
>>>
>>> Best,
>>> Wangda
>>>
>>> On Wed, Mar 14, 2018 at 3:14 PM, Vinod Kumar Vavilapalli
>>>  wrote:

 I see one new feature: https://issues.apache.org/jira/browse/YARN-7626:
 Allow regular expression matching in container-executor.cfg for devices
 and named docker volumes mount.

 There are 21 sub-tasks. There are three feature-type JIRAs in those -
 https://issues.apache.org/jira/browse/YARN-7972,
 https://issues.apache.org/jira/browse/YARN-7891 and
 https://issues.apache.org/jira/browse/YARN-5015. These should be okay -
 not major disrupting features.

 Everything else is either a bug-fix or an improvement so we should be
 good.

 From the list, it doesn't look like resetting will destabilize 3.1, +1
 for doing this.

 Thanks
 +Vinod

 On Mar 14, 2018, at 1:54 PM, Wangda Tan  wrote:
>
> Hi mapreduce/yarn/common/hdfs-devs,
>
> As of now, we have all blockers done for 3.1.0 release [1]. The release
> is running behind schedule due to a few security-related issues. Because of
> this and since branch-3.1 is cut 5 weeks before on Feb 8, trunk 3.2 is
> already diverging. There're 64 commits in trunk but not in branch-3.1. [2]
>
> I took a quick scan of them, most of them are good fixes which we should
> bring to 3.1.0 as well. And this can also reduce differences between 3.2.0
> and 3.1.0 release for less maintenance burden in the future.
>
> Unless anyone objects, we will reset branch-3.1 to trunk in 1-2 days and
> cut RC after that.
>
> Thoughts?
>
> - Wangda
>
> [1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND priority in (Blocker,
> Critical) AND resolution = Unresolved AND "Target Version/s" = 3.1.0 ORDER
> BY priority DESC
>
> [2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.2.0)
> AND fixVersion not in (3.1.0)



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org


Re: Apache Hadoop 3.0.1 Release plan

2018-02-02 Thread Arpit Agarwal
Hi Aaron/Lei,

Do you plan to roll an RC with an uncommitted fix? That isn't the right 
approach.

This issue has good visibility and enough discussion. If there is a binding 
veto in effect then the change must be abandoned. Else you should be able to 
proceed with committing. However, 3.0.0 must be called out as an abandoned 
release if we commit it.

Regards,
Arpit


On 2/1/18, 3:01 PM, "Lei Xu"  wrote:

Sounds good to me, ATM.

On Thu, Feb 1, 2018 at 2:34 PM, Aaron T. Myers  wrote:
> Hey Anu,
>
> My feeling on HDFS-12990 is that we've discussed it quite a bit already 
and
> it doesn't seem at this point like either side is going to budge. I'm
> certainly happy to have a phone call about it, but I don't expect that 
we'd
> make much progress.
>
> My suggestion is that we simply include the patch posted to HDFS-12990 in
> the 3.0.1 RC and call this issue out clearly in the subsequent VOTE thread
> for the 3.0.1 release. Eddy, are you up for that?
>
> Best,
> Aaron
>
> On Thu, Feb 1, 2018 at 1:13 PM, Lei Xu  wrote:
>>
>> +Xiao
>>
>> My understanding is that we will have this for 3.0.1.   Xiao, could
>> you give your inputs here?
>>
>> On Thu, Feb 1, 2018 at 11:55 AM, Anu Engineer 
>> wrote:
>> > Hi Eddy,
>> >
>> > Thanks for driving this release. Just a quick question, do we have time
>> > to close this issue?
>> > https://issues.apache.org/jira/browse/HDFS-12990
>> >
>> > or are we abandoning it? I believe that this is the last window for us
>> > to fix this issue.
>> >
>> > Should we have a call and get this resolved one way or another?
>> >
>> > Thanks
>> > Anu
>> >
>> > On 2/1/18, 10:51 AM, "Lei Xu"  wrote:
>> >
>> > Hi, All
>> >
>> > I just cut branch-3.0.1 from branch-3.0.  Please make sure all
>> > patches
>> > targeted to 3.0.1 being checked in both branch-3.0 and 
branch-3.0.1.
>> >
>> > Thanks!
>> > Eddy
>> >
>> > On Tue, Jan 9, 2018 at 11:17 AM, Lei Xu  wrote:
>> > > Hi, All
>> > >
>> > > We have released Apache Hadoop 3.0.0 in December [1]. To further
>> > > improve the quality of release, we plan to cut branch-3.0.1 
branch
>> > > tomorrow for the preparation of Apache Hadoop 3.0.1 release. The
>> > focus
>> > > of 3.0.1 will be fixing blockers (3), critical bugs (1) and bug
>> > fixes
>> > > [2].  No new features and improvement should be included.
>> > >
>> > > We plan to cut branch-3.0.1 tomorrow (Jan 10th) and vote for RC 
on
>> > Feb
>> > > 1st, targeting for Feb 9th release.
>> > >
>> > > Please feel free to share your insights.
>> > >
>> > > [1]
>> > https://www.mail-archive.com/general@hadoop.apache.org/msg07757.html
>> > > [2] https://issues.apache.org/jira/issues/?filter=12342842
>> > >
>> > > Best,
>> > > --
>> > > Lei (Eddy) Xu
>> > > Software Engineer, Cloudera
>> >
>> >
>> >
>> > --
>> > Lei (Eddy) Xu
>> > Software Engineer, Cloudera
>> >
>> >
>> > -
>> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>> >
>> >
>> >
>>
>>
>>
>> --
>> Lei (Eddy) Xu
>> Software Engineer, Cloudera
>>
>> -
>> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>>
>



-- 
Lei (Eddy) Xu
Software Engineer, Cloudera

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org





Re: Access to wiki

2018-01-09 Thread Arpit Agarwal
Granted Ajay write access to the confluence wiki.


On 1/9/18, 10:38 AM, "Steve Loughran"  wrote:

Is this for the confluence wiki at 
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Home



On 4 Jan 2018, at 20:25, Ajay Kumar  wrote:

Hi,

Can someone please help me get wiki access?
For one of the jira (HADOOP-14969) I want to add some information about 
configuration of secure Datanodes.

Thanks,
Ajay Kumar
Hortonworks




-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org


[jira] [Resolved] (HADOOP-15128) TestViewFileSystem tests are broken in trunk

2018-01-02 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-15128.

Resolution: Not A Problem

Reverted HADOOP-10054, let's make the right fix there.

> TestViewFileSystem tests are broken in trunk
> 
>
> Key: HADOOP-15128
> URL: https://issues.apache.org/jira/browse/HADOOP-15128
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 3.1.0
>Reporter: Anu Engineer
>Assignee: Hanisha Koneru
>
> The fix in Hadoop-10054 seems to have caused a test failure. Please take a 
> look. Thanks [~eyang] for reporting this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [ANNOUNCE] Apache Hadoop 3.0.0 GA is released

2017-12-18 Thread Arpit Agarwal
That makes sense for Beta users but most of our users will be upgrading from a 
previous GA release and the changelog will mislead them. The webpage does not 
mention this is a delta from the beta release.


From: Andrew Wang 
Date: Friday, December 15, 2017 at 10:36 AM
To: Arpit Agarwal 
Cc: general , "common-dev@hadoop.apache.org" 
, "yarn-...@hadoop.apache.org" 
, "mapreduce-...@hadoop.apache.org" 
, "hdfs-...@hadoop.apache.org" 

Subject: Re: [ANNOUNCE] Apache Hadoop 3.0.0 GA is released

Hi Arpit,

If you look at the release announcements, it's made clear that the changelog 
for 3.0.0 is diffed based on beta1. This is important since users need to know 
what's different from the previous 3.0.0-* releases if they're upgrading.

I agree there's additional value to making combined release notes, but it'd be 
something additive rather than replacing what's there.

Best,
Andrew

On Fri, Dec 15, 2017 at 8:27 AM, Arpit Agarwal  wrote:

Hi Andrew,

Thank you for all the hard work on this release. I was out the last few days 
and didn’t get a chance to evaluate RC1 earlier.

The changelog looks incorrect. E.g. This gives an impression that there are 
just 5 incompatible changes in 3.0.0.
http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/CHANGES.3.0.0.html

I assume you only counted 3.0.0 changes in this log excluding alphas/betas. 
However, users shouldn’t have to manually compile incompatibilities by summing 
up a/b release notes. Can we fix the changelog after the fact?




On 12/14/17, 10:45 AM, "Andrew Wang"  wrote:

Hi all,

I'm pleased to announce that Apache Hadoop 3.0.0 is generally available
(GA).

3.0.0 GA consists of 302 bug fixes, improvements, and other enhancements
since 3.0.0-beta1. This release marks a point of quality and stability for
the 3.0.0 release line, and users of earlier 3.0.0-alpha and -beta releases
are encouraged to upgrade.

Looking back, 3.0.0 GA is the culmination of over a year of work on the
3.0.0 line, starting with 3.0.0-alpha1 which was released in September
2016. Altogether, 3.0.0 incorporates 6,242 changes since 2.7.0.

Users are encouraged to read the overview of major changes
<http://hadoop.apache.org/docs/r3.0.0/index.html> in 3.0.0. The GA release notes
<http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/RELEASENOTES.3.0.0.html>
and changelog
<http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/CHANGES.3.0.0.html>
detail the changes since 3.0.0-beta1.

The ASF press release provides additional color and highlights some of the
major features:


https://globenewswire.com/news-release/2017/12/14/1261879/0/en/The-Apache-Software-Foundation-Announces-Apache-Hadoop-v3-0-0-General-Availability.html

Let me end by thanking the many, many contributors who helped with this
release line. We've only had three major releases in Hadoop's 10 year
history, and this is our biggest major release ever. It's an incredible
accomplishment for our community, and I'm proud to have worked with all of
you.

Best,
Andrew









Re: [ANNOUNCE] Apache Hadoop 3.0.0 GA is released

2017-12-15 Thread Arpit Agarwal

Hi Andrew,

Thank you for all the hard work on this release. I was out the last few days 
and didn’t get a chance to evaluate RC1 earlier.

The changelog looks incorrect. E.g. This gives an impression that there are 
just 5 incompatible changes in 3.0.0.
http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/CHANGES.3.0.0.html

I assume you only counted 3.0.0 changes in this log excluding alphas/betas. 
However, users shouldn’t have to manually compile incompatibilities by summing 
up a/b release notes. Can we fix the changelog after the fact?




On 12/14/17, 10:45 AM, "Andrew Wang"  wrote:

Hi all,

I'm pleased to announce that Apache Hadoop 3.0.0 is generally available
(GA).

3.0.0 GA consists of 302 bug fixes, improvements, and other enhancements
since 3.0.0-beta1. This release marks a point of quality and stability for
the 3.0.0 release line, and users of earlier 3.0.0-alpha and -beta releases
are encouraged to upgrade.

Looking back, 3.0.0 GA is the culmination of over a year of work on the
3.0.0 line, starting with 3.0.0-alpha1 which was released in September
2016. Altogether, 3.0.0 incorporates 6,242 changes since 2.7.0.

Users are encouraged to read the overview of major changes in 3.0.0. The GA
release notes and changelog detail the changes since 3.0.0-beta1.
The ASF press release provides additional color and highlights some of the
major features:


https://globenewswire.com/news-release/2017/12/14/1261879/0/en/The-Apache-Software-Foundation-Announces-Apache-Hadoop-v3-0-0-General-Availability.html

Let me end by thanking the many, many contributors who helped with this
release line. We've only had three major releases in Hadoop's 10 year
history, and this is our biggest major release ever. It's an incredible
accomplishment for our community, and I'm proud to have worked with all of
you.

Best,
Andrew









[jira] [Created] (HADOOP-15066) Spurious error stopping secure datanode

2017-11-22 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-15066:
--

 Summary: Spurious error stopping secure datanode
 Key: HADOOP-15066
 URL: https://issues.apache.org/jira/browse/HADOOP-15066
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
Reporter: Arpit Agarwal


Looks like there is a spurious error when stopping a secure datanode.

{code}
# hdfs --daemon stop datanode
cat: /var/run/hadoop/hdfs//hadoop-hdfs-root-datanode.pid: No such file or 
directory
WARNING: pid has changed for datanode, skip deleting pid file
cat: /var/run/hadoop/hdfs//hadoop-hdfs-root-datanode.pid: No such file or 
directory
WARNING: daemon pid has changed for datanode, skip deleting daemon pid file
{code}




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.0 RC0

2017-11-20 Thread Arpit Agarwal
Thanks for that proposal Andrew, and for not wrapping up the vote yesterday.

> In terms of downstream testing, we've done extensive
> integration testing with downstreams via the alphas
> and betas, and we have continuous integration running
> at Cloudera against branch-3.0.

Could you please share what kind of downstream testing you have performed and 
with which downstream projects?

Regards,
Arpit


On 11/17/17, 3:27 PM, "Andrew Wang"  wrote:

Hi Arpit,

I agree the timing is not great here, but extending it to meaningfully
avoid the holidays would mean extending it an extra week (e.g. to the
29th). We've been coordinating with ASF PR for that Tuesday, so I'd really,
really like to get the RC out before then.

In terms of downstream testing, we've done extensive integration testing
with downstreams via the alphas and betas, and we have continuous
integration running at Cloudera against branch-3.0. Because of this, I have
more confidence in our integration for 3.0.0 than most Hadoop releases.

Is it meaningful to extend to say, the 21st, which provides for a full week
of voting?

Best,
Andrew

    On Fri, Nov 17, 2017 at 1:27 PM, Arpit Agarwal 
wrote:

> Hi Andrew,
>
> Thank you for your hard work in getting us to this step. This is our first
> major GA release in many years.
>
> I feel a 5-day vote window ending over the weekend before thanksgiving may
> not provide sufficient time to evaluate this RC especially for downstream
> components.
>
> Would you please consider extending the voting deadline until a few days
> after the thanksgiving holiday? It would be a courtesy to our broader
> community and I see no harm in giving everyone a few days to evaluate it
> more thoroughly.
>
> On a lighter note, your deadline is also 4 minutes short of the required 5
> days. :)
>
> Regards,
> Arpit
>
>
>
> On 11/14/17, 1:34 PM, "Andrew Wang"  wrote:
>
> Hi folks,
>
> Thanks as always to the many, many contributors who helped with this
> release. I've created RC0 for Apache Hadoop 3.0.0. The artifacts are
> available here:
>
> http://people.apache.org/~wang/3.0.0-RC0/
>
> This vote will run 5 days, ending on Nov 19th at 1:30pm Pacific.
>
> 3.0.0 GA contains 291 fixed JIRA issues since 3.0.0-beta1. Notable
> additions include the merge of YARN resource types, API-based
> configuration
> of the CapacityScheduler, and HDFS router-based federation.
>
> I've done my traditional testing with a pseudo cluster and a Pi job.
> My +1
> to start.
>
> Best,
> Andrew
>
>
>




Re: [VOTE] Release Apache Hadoop 3.0.0 RC0

2017-11-17 Thread Arpit Agarwal
Hi Andrew,

Thank you for your hard work in getting us to this step. This is our first 
major GA release in many years.

I feel a 5-day vote window ending over the weekend before thanksgiving may not 
provide sufficient time to evaluate this RC especially for downstream 
components.

Would you please consider extending the voting deadline until a few days after 
the thanksgiving holiday? It would be a courtesy to our broader community and I 
see no harm in giving everyone a few days to evaluate it more thoroughly.

On a lighter note, your deadline is also 4 minutes short of the required 5 
days. :)

Regards,
Arpit



On 11/14/17, 1:34 PM, "Andrew Wang"  wrote:

Hi folks,

Thanks as always to the many, many contributors who helped with this
release. I've created RC0 for Apache Hadoop 3.0.0. The artifacts are
available here:

http://people.apache.org/~wang/3.0.0-RC0/

This vote will run 5 days, ending on Nov 19th at 1:30pm Pacific.

3.0.0 GA contains 291 fixed JIRA issues since 3.0.0-beta1. Notable
additions include the merge of YARN resource types, API-based configuration
of the CapacityScheduler, and HDFS router-based federation.

I've done my traditional testing with a pseudo cluster and a Pi job. My +1
to start.

Best,
Andrew




Re: Access to Confluence Wiki

2017-10-31 Thread Arpit Agarwal
Can you please grant me write access too?

Thanks.


On 10/30/17, 11:01 PM, "Akira Ajisaka"  wrote:

Done. Welcome!

-Akira

On 2017/10/28 3:26, Hanisha Koneru wrote:
> Hi,
> 
> Can I please get access to the Confluence Hadoop Wiki. My confluence id 
is “hanishakoneru”.
> 
> Thanks,
> Hanisha
> 

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org





[jira] [Created] (HADOOP-14287) Compiling trunk with -DskipShade fails

2017-04-06 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-14287:
--

 Summary: Compiling trunk with -DskipShade fails 
 Key: HADOOP-14287
 URL: https://issues.apache.org/jira/browse/HADOOP-14287
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0-alpha3
Reporter: Arpit Agarwal


Get the following errors when compiling trunk with -DskipShade. It succeeds 
with shading.

{code}
[ERROR] COMPILATION ERROR :
[ERROR] 
/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[41,30]
 cannot find symbol
  symbol:   class HdfsConfiguration
  location: package org.apache.hadoop.hdfs
[ERROR] 
/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[45,34]
 cannot find symbol
  symbol:   class WebHdfsConstants
  location: package org.apache.hadoop.hdfs.web
[ERROR] 
/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[71,36]
 cannot find symbol
  symbol:   class HdfsConfiguration
  location: class org.apache.hadoop.example.ITUseMiniCluster
[ERROR] 
/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[85,53]
 cannot access org.apache.hadoop.hdfs.DistributedFileSystem
  class file for org.apache.hadoop.hdfs.DistributedFileSystem not found
[ERROR] 
/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[109,38]
 cannot find symbol
  symbol:   variable WebHdfsConstants
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-7880) The Single Node and Cluster Setup docs don't cover HDFS

2017-03-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-7880.
---
Resolution: Not A Problem

This is covered by our docs now, resolving.
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/ClusterSetup.html

> The Single Node and Cluster Setup docs don't cover HDFS
> ---
>
> Key: HADOOP-7880
> URL: https://issues.apache.org/jira/browse/HADOOP-7880
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.23.0
>Reporter: Eli Collins
>
> The main docs page (http://hadoop.apache.org/common/docs/r0.23.0) only has 
> HDFS docs for federation. Only MR2 is covered in the single node and cluster 
> setup documentation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14121) Fix occasional BindException in TestNameNodeMetricsLogger

2017-02-24 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-14121:
--

 Summary: Fix occasional BindException in TestNameNodeMetricsLogger
 Key: HADOOP-14121
 URL: https://issues.apache.org/jira/browse/HADOOP-14121
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


TestNameNodeMetricsLogger occasionally hits BindException even though it uses 
ServerSocketUtil.getPort to get a random port number.

It's better to specify a port number of 0 and let the OS allocate an unused 
port.
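
For illustration, a minimal sketch of the approach (class name invented, not
part of the patch):

{code}
// Bind to port 0 so the OS allocates a free ephemeral port, then read back
// the port that was actually chosen.
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class EphemeralPortSketch {
  public static void main(String[] args) throws Exception {
    try (ServerSocket socket = new ServerSocket()) {
      socket.bind(new InetSocketAddress("localhost", 0)); // 0 = OS picks the port
      System.out.println("OS allocated port " + socket.getLocalPort());
    }
  }
}
{code}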



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-14002) Document -DskipShade property in BUILDING.txt

2017-01-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-14002.

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha3

Thanks for the review [~asuresh]. Committed this to trunk.

> Document -DskipShade property in BUILDING.txt
> -
>
> Key: HADOOP-14002
> URL: https://issues.apache.org/jira/browse/HADOOP-14002
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14002.000.patch, HADOOP-14002.001.patch
>
>
> HADOOP-13999 added a maven profile to disable client jar shading. This 
> property should be documented in BUILDING.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release cadence and EOL

2017-01-19 Thread Arpit Agarwal
The ASF release policy says releases may not be vetoed [1] so the EOL policy 
sounds unenforceable. Not sure a release cadence is enforceable either since 
Release Managers are volunteers.

1. https://www.apache.org/dev/release.html#approving-a-release



On 1/18/17, 7:06 PM, "Junping Du"  wrote:

+1 on Sangjin's proposal - 
"A minor release line is end-of-lifed 2 years after it is released or there
are 2 newer minor releases, whichever is sooner. The community reserves the
right to extend or shorten the life of a release line if there is a good
reason to do so."

I also noticed Karthik bring up some new proposals - some of them look 
interesting to me and I have some ideas as well. Karthik, can you bring them up 
in a separate discussion thread so that we can discuss from there?

About Chris Trezzo's question on the definition of EOL for a Hadoop release, I 
think the potential changes could be: 
1. Users of Apache Hadoop would be expected to upgrade to a new minor/major 
release after the EOL of their current release, because there is no guarantee 
of a new maintenance release.

2. For release effort, Apache policy says that any committer can volunteer as 
RM for any release. If this release EOL proposal passes and is written into the 
Hadoop bylaws, anyone who wants to call for a release which is EOL has to 
provide a good reason to the community and get it voted on before starting the 
release effort. We don't want to waste community time/resources verifying and 
voting on a release of narrow interest.

3. About committers' responsibility, I think the bottom line is that a 
committer should commit a patch to the contributor's target release and to any 
release he/she is interested in; here I conservatively agree with Allen's point 
that this vote doesn't change anything. But if a committer wants to take care 
of the broader interest of the whole community, like most committers do today, 
he/she should understand which branches benefit more people and could skip 
some EOL release branches in the backport effort.

About major release EOL, this could be more complicated and I think we 
should discuss it separately.

Thanks,

Junping

From: Allen Wittenauer 
Sent: Wednesday, January 18, 2017 3:30 PM
To: Chris Trezzo
Cc: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
Subject: Re: [VOTE] Release cadence and EOL

> On Jan 18, 2017, at 11:21 AM, Chris Trezzo  wrote:
>
> Thanks Sangjin for pushing this forward! I have a few questions:

These are great questions, because I know I'm not seeing a whole 
lot of substance in this vote.  The way to EOL software in the open source 
universe is with new releases and aging it out.  If someone wants to be a RE 
for a new branch-1 release, more power to them.  As volunteers to the ASF, 
we're not on the hook to provide much actual support.  This feels more like a 
vendor play than a community one.  But if the PMC want to vote on it, whatever. 
 It won't be the first bylaw that doesn't really mean much.

> 1. What is the definition of end-of-life for a release in the hadoop
> project? My current understanding is as follows: When a release line
> reaches end-of-life, there are no more planned releases for that line.
> Committers are no longer responsible for back-porting bug fixes to the 
line
> (including fixed security vulnerabilities) and it is essentially
> unmaintained.

Just a point of clarification.  There is no policy that says that 
committers must back port.  It's up to the individual committers to push a 
change onto any particular branch. Therefore, this vote doesn't really change 
anything in terms of committer responsibilities here.

> 2. How do major releases affect the end-of-life proposal? For example, how
> does a new minor release in the next major release affect the end-of-life
> of minor releases in a previous major release? Is it possible to have a
> maintained 2.x release if there is a 3.3 release?

I'm looking forward to seeing this answer too, given that 2.7.0 is 
probably past the 2 year mark, 2.8.0 has seemingly been in a holding pattern 
for over a year, and the next 3.0.0 alpha should be RSN

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org


-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org






[jira] [Resolved] (HADOOP-6751) hadoop daemonlog does not work from command line with security enabled

2016-12-07 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-6751.
---
Resolution: Duplicate

> hadoop daemonlog does not work from command line with security enabled
> --
>
> Key: HADOOP-6751
> URL: https://issues.apache.org/jira/browse/HADOOP-6751
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>
> daemonlog command line is not working with security enabled.
> We need to support both browser interface and command line with security 
> enabled for daemonlog.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-6751) hadoop daemonlog does not work from command line with security enabled

2016-12-07 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HADOOP-6751:
---

> hadoop daemonlog does not work from command line with security enabled
> --
>
> Key: HADOOP-6751
> URL: https://issues.apache.org/jira/browse/HADOOP-6751
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>
> daemonlog command line is not working with security enabled.
> We need to support both browser interface and command line with security 
> enabled for daemonlog.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-13737) Cleanup DiskChecker interface

2016-10-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HADOOP-13737:

  Assignee: Arpit Agarwal

Resolved the wrong issue!

> Cleanup DiskChecker interface
> -
>
> Key: HADOOP-13737
> URL: https://issues.apache.org/jira/browse/HADOOP-13737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>    Reporter: Arpit Agarwal
>    Assignee: Arpit Agarwal
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13737.01.patch
>
>
> The DiskChecker class has a few unused public methods. We can remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13738) DiskChecker should perform some file IO

2016-10-19 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-13738:
--

 Summary: DiskChecker should perform some file IO
 Key: HADOOP-13738
 URL: https://issues.apache.org/jira/browse/HADOOP-13738
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


DiskChecker can fail to detect total disk/controller failures indefinitely. We 
have seen this in real clusters. DiskChecker performs simple permissions-based 
checks on directories which do not guarantee that any disk IO will be attempted.

A simple improvement is to write some data and flush it to the disk.
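
A minimal sketch of the proposed check (probe file name and method name are
invented):

{code}
// Force real disk IO in the target directory and sync it to the device, so
// a dead disk or controller surfaces here instead of passing a permissions
// check.
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class DiskIoCheckSketch {
  public static void checkDirWithDiskIo(File dir) throws IOException {
    File probe = new File(dir, ".diskcheck." + System.nanoTime());
    try (FileOutputStream out = new FileOutputStream(probe)) {
      out.write(new byte[512]); // attempt an actual write
      out.getFD().sync();       // flush the data through to the device
    } finally {
      probe.delete();           // best-effort cleanup of the probe file
    }
  }
}
{code}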



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13737) Cleanup DiskChecker interface

2016-10-19 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-13737:
--

 Summary: Cleanup DiskChecker interface
 Key: HADOOP-13737
 URL: https://issues.apache.org/jira/browse/HADOOP-13737
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


The DiskChecker class has a few unused public methods. We can remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13668) Make InstrumentedLock require ReentrantLock

2016-09-28 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-13668:
--

 Summary: Make InstrumentedLock require ReentrantLock
 Key: HADOOP-13668
 URL: https://issues.apache.org/jira/browse/HADOOP-13668
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


Make InstrumentedLock use ReentrantLock instead of Lock, so nested 
acquire/release calls can be instrumented correctly.
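
An illustrative sketch of why the stronger type is needed (not the real
InstrumentedLock code): {{ReentrantLock#getHoldCount}} lets the wrapper time
only the outermost acquire/release pair, which the plain {{Lock}} interface
cannot express.

{code}
import java.util.concurrent.locks.ReentrantLock;

public class NestedLockSketch {
  private final ReentrantLock lock = new ReentrantLock();
  private long acquiredAtNanos;

  public void lock() {
    lock.lock();
    if (lock.getHoldCount() == 1) {        // outermost acquire only
      acquiredAtNanos = System.nanoTime(); // start timing the critical section
    }
  }

  public void unlock() {
    if (lock.getHoldCount() == 1) {        // outermost release only
      long heldNanos = System.nanoTime() - acquiredAtNanos;
      System.out.println("lock held for " + heldNanos + " ns");
    }
    lock.unlock();
  }
}
{code}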



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13467) Shell#getSignalKillCommand should use the bash builtin on Linux

2016-08-03 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-13467:
--

 Summary: Shell#getSignalKillCommand should use the bash builtin on 
Linux
 Key: HADOOP-13467
 URL: https://issues.apache.org/jira/browse/HADOOP-13467
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


HADOOP-13434 inadvertently undid the fix made in HADOOP-12441.

The use of the bash builtin for kill was intentional, so let's restore that 
behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-13434) Add quoting to Shell class

2016-08-02 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HADOOP-13434:


Reopening to attach branch-2.7 patch.

> Add quoting to Shell class
> --
>
> Key: HADOOP-13434
> URL: https://issues.apache.org/jira/browse/HADOOP-13434
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 2.8.0
>
> Attachments: HADOOP-13434.patch, HADOOP-13434.patch, 
> HADOOP-13434.patch
>
>
> The Shell class makes assumptions that the parameters won't have spaces or 
> other special characters, even when it invokes bash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13457) Remove hardcoded absolute path for shell executable

2016-08-02 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-13457:
--

 Summary: Remove hardcoded absolute path for shell executable
 Key: HADOOP-13457
 URL: https://issues.apache.org/jira/browse/HADOOP-13457
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


Shell.java has a hardcoded path to /bin/bash which is not correct on all 
platforms. 

Pointed out by [~aw] while reviewing HADOOP-13434.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13424) namenode connect time out in cluster with 65 machiones

2016-07-25 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-13424.

Resolution: Invalid

[~wanglaichao] Jira is not a support channel. Please use u...@hadoop.apache.org.

> namenode connect time out in cluster with 65 machiones
> --
>
> Key: HADOOP-13424
> URL: https://issues.apache.org/jira/browse/HADOOP-13424
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.4.1
> Environment: hadoop 2.4.1
>Reporter: wanglaichao
>
> Befor out cluster has 50 nodes ,it runs ok. Recently we add 15 node ,it 
> always reports errors with connectint  timeout.Who can help me ,thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-12903) Add IPC Server support for suppressing exceptions by type, suppress 'server too busy' messages

2016-03-07 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12903:
--

 Summary: Add IPC Server support for suppressing exceptions by 
type, suppress 'server too busy' messages
 Key: HADOOP-12903
 URL: https://issues.apache.org/jira/browse/HADOOP-12903
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.7.2
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


HADOOP-10597 added support for RPC congestion control by sending retriable 
'server too busy' exceptions to clients. 

However every backoff results in a log message. We've seen these log messages 
slow down the NameNode.
{code}
2016-03-07 15:02:23,272 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for 
port 8020: readAndProcess from client 127.0.0.1 threw exception 
[org.apache.hadoop.ipc.RetriableException: Server is too busy.]
{code}

We already have a metric that tracks the number of backoff events. This log 
message adds nothing useful.
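
A minimal sketch of the idea (names invented, not the actual ipc.Server API):
keep a registry of exception classes that are counted by metrics but skipped
by the logger.

{code}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class TerseExceptionsSketch {
  private final Set<Class<? extends Throwable>> suppressed =
      ConcurrentHashMap.newKeySet();

  public void addTerseException(Class<? extends Throwable> type) {
    suppressed.add(type); // e.g. RetriableException for "server too busy"
  }

  public boolean shouldLog(Throwable t) {
    return !suppressed.contains(t.getClass()); // metrics still count backoffs
  }
}
{code}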



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


'Target Version' field missing in Jira

2016-02-12 Thread Arpit Agarwal
Is it just me or has the Target Version/s field gone missing from Apache Hadoop 
Jira? I don't recall any recent discussion about it.


[jira] [Created] (HADOOP-12746) ReconfigurableBase should update the cached configuration consistently

2016-01-27 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12746:
--

 Summary: ReconfigurableBase should update the cached configuration 
consistently
 Key: HADOOP-12746
 URL: https://issues.apache.org/jira/browse/HADOOP-12746
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


{{ReconfigurableBase}} does not always update the cached configuration after a 
property is reconfigured.

The older {{#reconfigureProperty}} does so; however, {{ReconfigurationThread}} 
does not.

See discussion on HDFS-7035 for more background.
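
A minimal sketch of the invariant a fix should enforce (illustrative, not the
actual class): every reconfiguration path writes the new value back into the
cached configuration.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ReconfigurableSketch {
  private final Map<String, String> cachedConf = new ConcurrentHashMap<>();

  // Both the synchronous call path and the background reconfiguration
  // thread should funnel through one helper that updates the cache.
  public synchronized void applyChange(String property, String newVal) {
    // ... apply the change to the running component here ...
    if (newVal == null) {
      cachedConf.remove(property); // property reset to its default
    } else {
      cachedConf.put(property, newVal);
    }
  }
}
{code}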



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.7.2 RC2

2016-01-18 Thread Arpit Agarwal
+1 (binding)

- verified signatures
- Installed pseudo-cluster from binary distribution and ran example MapReduce 
jobs
- Built from sources and reran tests on pseudo-cluster




On 1/14/16, 8:57 PM, "Vinod Kumar Vavilapalli"  wrote:

>Hi all,
>
>I've created an updated release candidate RC2 for Apache Hadoop 2.7.2.
>
>As discussed before, this is the next maintenance release to follow up 2.7.1.
>
>The RC is available for validation at: 
>http://people.apache.org/~vinodkv/hadoop-2.7.2-RC2/
>
>The RC tag in git is: release-2.7.2-RC2
>
>The maven artifacts are available via repository.apache.org 
> at 
>https://repository.apache.org/content/repositories/orgapachehadoop-1027 
>
>
>The release-notes are inside the tar-balls at location 
>hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I hosted 
>this at http://people.apache.org/~vinodkv/hadoop-2.7.2-RC2/releasenotes.html 
> for 
>your quick perusal.
>
>As you may have noted,
> - I terminated the RC1 related voting thread after finding out that we didn’t 
> have a bunch of patches that are already in the released 2.6.3 version. After 
> a brief discussion, we decided to keep the parallel 2.6.x and 2.7.x releases 
> incremental, see [4] for this discussion.
> - The RC0 related voting thread got halted due to some critical issues. It 
> took a while again for getting all those blockers out of the way. See the 
> previous voting thread [3] for details.
> - Before RC0, an unusually long 2.6.3 release caused 2.7.2 to slip by quite a 
> bit. This release's related discussion threads are linked below: [1] and [2].
>
>Please try the release and vote; the vote will run for the usual 5 days.
>
>Thanks,
>Vinod
>
>[1]: 2.7.2 release plan: http://markmail.org/message/oozq3gvd4nhzsaes 
>
>[2]: Planning Apache Hadoop 2.7.2 http://markmail.org/message/iktqss2qdeykgpqk 
>
>[3]: [VOTE] Release Apache Hadoop 2.7.2 RC0: 
>http://markmail.org/message/5txhvr2qdiqglrwc 
>
>[4] Retracted [VOTE] Release Apache Hadoop 2.7.2 RC1: 
>http://markmail.org/thread/n7ljbsnquihn3wlw


[jira] [Created] (HADOOP-12665) Document hadoop.security.token.service.use_ip

2015-12-21 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12665:
--

 Summary: Document hadoop.security.token.service.use_ip
 Key: HADOOP-12665
 URL: https://issues.apache.org/jira/browse/HADOOP-12665
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.8.0
Reporter: Arpit Agarwal


{{hadoop.security.token.service.use_ip}} is not documented in 2.x/trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.7.2 RC1

2015-12-21 Thread Arpit Agarwal
Thanks for putting together this release Vinod.

+1 (binding)

 - Verified signatures
 - Started pseudo-cluster with binary distribution, verified git commit ID
 - Built sources, deployed pseudo-cluster and ran example map reduce jobs, 
DistributedShell, HDFS commands.


PS: Extra file hadoop-2.7.2-RC1-src.tar.gz.md5?





On 12/16/15, 6:49 PM, "Vinod Kumar Vavilapalli"  wrote:

>Hi all,
>
>I've created a release candidate RC1 for Apache Hadoop 2.7.2.
>
>As discussed before, this is the next maintenance release to follow up 2.7.1.
>
>The RC is available for validation at: 
>http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/ 
>
>
>The RC tag in git is: release-2.7.2-RC1
>
>The maven artifacts are available via repository.apache.org 
> at 
>https://repository.apache.org/content/repositories/orgapachehadoop-1026/ 
>
>
>The release-notes are inside the tar-balls at location 
>hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I hosted 
>this at http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/releasenotes.html 
>for quick perusal.
>
>As you may have noted,
> - The RC0 related voting thread got halted due to some critical issues. It 
> took a while again for getting all those blockers out of the way. See the 
> previous voting thread [3] for details.
> - Before RC0, an unusually long 2.6.3 release caused 2.7.2 to slip by quite a 
> bit. This release's related discussion threads are linked below: [1] and [2].
>
>Please try the release and vote; the vote will run for the usual 5 days.
>
>Thanks,
>Vinod
>
>[1]: 2.7.2 release plan: http://markmail.org/message/oozq3gvd4nhzsaes 
>
>[2]: Planning Apache Hadoop 2.7.2 http://markmail.org/message/iktqss2qdeykgpqk 
>
>[3]: [VOTE] Release Apache Hadoop 2.7.2 RC0: 
>http://markmail.org/message/5txhvr2qdiqglrwc
>


[jira] [Created] (HADOOP-12664) UGI auto-renewer does not verify kinit availability during initialization

2015-12-21 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12664:
--

 Summary: UGI auto-renewer does not verify kinit availability 
during initialization
 Key: HADOOP-12664
 URL: https://issues.apache.org/jira/browse/HADOOP-12664
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Arpit Agarwal
Priority: Minor


UGI auto-renewer does not verify that {{hadoop.kerberos.kinit.command}} is in 
the path during initialization. If not available, the auto-renewal thread will 
hit an error during TGT renewal. We recently saw a case where it manifests as 
transient errors during client program execution which can be hard to track 
down without UGI logging.

It seems like {{kinit}} availability should be verified during initialization 
to make the behavior more predictable.
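
A hedged sketch of such a check (class and method names invented, not existing
UGI code): resolve the configured command at startup and fail fast if it is
missing.

{code}
import java.io.File;

public class KinitCheckSketch {
  public static void verifyKinitAvailable(String kinitCmd) {
    File f = new File(kinitCmd);
    if (f.isAbsolute()) {
      if (!f.canExecute()) {
        throw new IllegalStateException(kinitCmd + " is not executable");
      }
      return;
    }
    String path = System.getenv("PATH");
    if (path != null) {
      for (String dir : path.split(File.pathSeparator)) {
        if (new File(dir, kinitCmd).canExecute()) {
          return; // found an executable kinit on the PATH
        }
      }
    }
    throw new IllegalStateException(kinitCmd + " not found on PATH");
  }
}
{code}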



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.6.3 RC0

2015-12-16 Thread Arpit Agarwal
+1 (binding)

- Verified signatures for source and binary distributions
- Built jars from source with java 1.7.0_79
- Deployed single-node pseudo-cluster
- Ran example map reduce jobs
- Ran hdfs admin commands, verified NN web UI shows expected usages



On 12/11/15, 4:16 PM, "Junping Du"  wrote:

>
>Hi all developers in hadoop community,
>   I've created a release candidate RC0 for Apache Hadoop 2.6.3 (the next 
> maintenance release to follow up 2.6.2.) according to email thread of release 
> plan 2.6.3 [1]. Sorry for this RC coming a bit late as several blocker issues 
> were getting committed until yesterday. Below are the details:
>
>The RC is available for validation at:
>http://people.apache.org/~junping_du/hadoop-2.6.3-RC0/
>
>The RC tag in git is: release-2.6.3-RC0
>
>The maven artifacts are staged via repository.apache.org at:
>https://repository.apache.org/content/repositories/orgapachehadoop-1025/
>
>You can find my public key at:
>http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS
>
>Please try the release and vote. The vote will run for the usual 5 days.
>
>Thanks and happy weekend!
>
>
>Cheers,
>
>Junping
>
>
>[1]: 2.6.3 release plan: http://markmail.org/thread/nc2jogbgni37vu6y
>


Re: Github integration for Hadoop

2015-10-29 Thread Arpit Agarwal
On 10/29/15, 5:14 PM, "Andrew Wang"  wrote:



>On Thu, Oct 29, 2015 at 4:58 PM, Arpit Agarwal 
>wrote:
>
>> On 10/29/15, 3:48 PM, "Andrew Wang"  wrote:
>>
>>
>> > If we pushed for it, I think it could happen.
>>
>> Gerrit support is a complete unknown. The community response to date
>> supports Github integration. Github will appeal to new contributors as
>> their Github profiles will reflect their work. I'd be interested in hearing
>> more contributor opinions.
>>
>> Owen said above that he was proposing using github as a review tool, not
>for code integration. So contributors wouldn't have anything showing up on
>their github profiles, since we aren't directly taking PRs.
>
>However, if we were to use GH for integration, it would be with the
>auto-squash to avoid the merge commit. Would this preserve the correct
>attribution?

The original mail said pull request integration. Unless infra is planning on 
Gerrit integration soon it's not a practical alternative.


>>
>> > If that's the case, we should also look at review alternatives
>> > like RB and Crucible.
>>
>> Okay by me if the community consensus supports one of them. The fact that
>> they exist but no one uses them is not a ringing endorsement.
>>
>> HBase uses reviewboard, as do I'm sure other Apache projects.
>reviews.apache.org existed before we had github integration. I've used RB a
>fair bit, and don't mind it.

I could not get RB working with the Hadoop sub-projects. Would you be willing 
to try it out on a Hadoop/HDFS Jira since you have experience with it?


Re: Github integration for Hadoop

2015-10-29 Thread Arpit Agarwal
On 10/29/15, 3:48 PM, "Andrew Wang"  wrote:


> If we pushed for it, I think it could happen. 

Gerrit support is a complete unknown. The community response to date supports 
Github integration. Github will appeal to new contributors as their Github 
profiles will reflect their work. I'd be interested in hearing more contributor 
opinions.



> If that's the case, we should also look at review alternatives
> like RB and Crucible.

Okay by me if the community consensus supports one of them. The fact that they 
exist but no one uses them is not a ringing endorsement.



Re: Github integration for Hadoop

2015-10-29 Thread Arpit Agarwal
+1, thanks for proposing it.





On 10/29/15, 10:47 AM, "Owen O'Malley"  wrote:

>All,
>   For code & patch review, many of the newer projects are using the Github
>pull request integration. You can read about it here:
>
>https://blogs.apache.org/infra/entry/improved_integration_between_apache_and
>
>It basically lets you:
>* have mirroring between comments on pull requests and jira
>* lets you close pull requests
>* have mirroring between pull request comments and the Apache mail lists
>
>Thoughts?
>.. Owen


[jira] [Created] (HADOOP-12522) Simplify adding NN service RPC port to an existing HA cluster with ZKFCs

2015-10-27 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12522:
--

 Summary: Simplify adding NN service RPC port to an existing HA 
cluster with ZKFCs
 Key: HADOOP-12522
 URL: https://issues.apache.org/jira/browse/HADOOP-12522
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.7.1
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


ZKFCs fail the following check in {{DFSZKFailoverController#dataToTarget}} if 
an NN service RPC port is added to an existing cluster.

{code}
  protected HAServiceTarget dataToTarget(byte[] data) {
...
if (!addressFromProtobuf.equals(ret.getAddress())) {
  throw new RuntimeException("Mismatched address stored in ZK for " +
  ret + ": Stored protobuf was " + proto + ", address from our own " +
  "configuration for this NameNode was " + ret.getAddress());
}

{code}

The NN address stored in the znode had the common client+service RPC port 
number whereas the configuration now returns an address with the service RPC 
port. The workaround is to reformat the ZKFC state in ZK with {{hdfs zkfc 
-formatZK}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions

2015-08-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HADOOP-10571:

  Assignee: (was: Arpit Agarwal)

Thanks Steve. The original patch is quite out of date. I'll post an updated 
patch if I get some time. Leaving unassigned for now.

> Use Log.*(Object, Throwable) overload to log exceptions
> ---
>
> Key: HADOOP-10571
> URL: https://issues.apache.org/jira/browse/HADOOP-10571
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>    Reporter: Arpit Agarwal
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10571.01.patch
>
>
> When logging an exception, we often convert the exception to string or call 
> {{.getMessage}}. Instead we can use the log method overloads which take 
> {{Throwable}} as a parameter.
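
For illustration, a before/after with a commons-logging logger (logger and
message are made up):

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class LogOverloadSketch {
  private static final Log LOG = LogFactory.getLog(LogOverloadSketch.class);

  void handle(Exception e) {
    // Before: converting the exception to a string loses the stack trace.
    LOG.warn("Failed to read block: " + e.getMessage());

    // After: the (Object, Throwable) overload logs the full stack trace.
    LOG.warn("Failed to read block", e);
  }
}
{code}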



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions

2015-08-25 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-10571.

Resolution: Won't Fix

> Use Log.*(Object, Throwable) overload to log exceptions
> ---
>
> Key: HADOOP-10571
> URL: https://issues.apache.org/jira/browse/HADOOP-10571
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>    Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10571.01.patch
>
>
> When logging an exception, we often convert the exception to a string or call 
> {{.getMessage}}. Instead, we can use the log method overloads that take 
> {{Throwable}} as a parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12272) Refactor ipc.Server and implementations to reduce constructor bloat

2015-07-24 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12272:
--

 Summary: Refactor ipc.Server and implementations to reduce 
constructor bloat
 Key: HADOOP-12272
 URL: https://issues.apache.org/jira/browse/HADOOP-12272
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Arpit Agarwal


{{ipc.Server}} and its implementations have constructors taking a large number of 
parameters. This code can be simplified quite a bit by moving RPC.Builder 
to the Server class and passing the builder object to the constructors.

The refactoring should be safe based on the class annotations, but we need to 
confirm that no dependent components will break.
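
A minimal sketch of the proposed shape, assuming illustrative members rather 
than the actual {{ipc.Server}} fields:

{code}
public abstract class Server {
  private final String bindAddress;
  private final int port;
  private final int handlerCount;

  // A single builder argument replaces the long positional constructor.
  protected Server(Builder builder) {
    this.bindAddress = builder.bindAddress;
    this.port = builder.port;
    this.handlerCount = builder.handlerCount;
  }

  public static class Builder {
    private String bindAddress = "0.0.0.0";
    private int port;
    private int handlerCount = 1;

    public Builder setBindAddress(String bindAddress) {
      this.bindAddress = bindAddress;
      return this;
    }

    public Builder setPort(int port) {
      this.port = port;
      return this;
    }

    public Builder setHandlerCount(int handlerCount) {
      this.handlerCount = handlerCount;
      return this;
    }
  }
}
{code}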



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12250) Enable RPC Congestion control by default

2015-07-17 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12250:
--

 Summary: Enable RPC Congestion control by default
 Key: HADOOP-12250
 URL: https://issues.apache.org/jira/browse/HADOOP-12250
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


We propose enabling RPC congestion control introduced by HADOOP-10597 by 
default. We enabled it on a couple of large clusters a few weeks ago and it has 
helped keep the namenodes responsive under load.
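
For reference, the control introduced by HADOOP-10597 is keyed per listener 
port; a hedged sketch of enabling it explicitly (port number illustrative, and 
the property name should be verified against the release defaults):

{code}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Enables RPC backoff (congestion control) for the server on port 8020.
conf.setBoolean("ipc.8020.backoff.enable", true);
{code}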



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-12212) Hi, I am trying to start the namenode but it keeps showing: Failed to start namenode. java.net.BindException: Address already in use

2015-07-09 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-12212.

  Resolution: Auto Closed
Target Version/s:   (was: 2.7.0)

Hi Joel, I am closing this. You probably want to send your setup questions to 
u...@hadoop.apache.org. Thanks.
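
For context, the reported failure is simply a second bind on a port that 
already has a listener; a minimal standalone sketch that reproduces the same 
exception:

{code}
import java.net.ServerSocket;

public class BindDemo {
  public static void main(String[] args) throws Exception {
    try (ServerSocket first = new ServerSocket(9000)) {
      // Throws java.net.BindException: Address already in use
      new ServerSocket(9000);
    }
  }
}
{code}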

> Hi, I am trying to start the namenode but it keeps showing: Failed to start 
> namenode. java.net.BindException: Address already in use
> 
>
> Key: HADOOP-12212
> URL: https://issues.apache.org/jira/browse/HADOOP-12212
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.7.0
> Environment: Ubuntu 14.04 trusty
>Reporter: Joel
>  Labels: hadoop, hdfs, namenode
>
> Hi, I am trying to start the namenode but it keeps showing: Failed to start 
> namenode. java.net.BindException: Address already in use;. netstat -a | grep 
> 9000 returns 
> tcp        0      0 *:9000          *:*             LISTEN
> tcp6       0      0 [::]:9000       [::]:*          LISTEN
> Is this normal or do I need to kill one of the processes?
> The hdfs-site.xml is given below:  
> <property><name>dfs.replication</name><value>1</value></property>
> <property><name>dfs.namenode.name.dir</name>
> <value>file:///usr/local/hdfs/namenode</value></property>
> <property><name>dfs.datanode.data.dir</name>
> <value>file:///usr/local/hdfs/datanode</value></property>
> namenode logs are given below:
> --
> 2015-07-10 00:27:02,513 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> registered UNIX signal handlers for [TERM, HUP, INT]
> 2015-07-10 00:27:02,538 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> createNameNode []
> 2015-07-10 00:27:07,549 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: 
> loaded properties from hadoop-metrics2.properties
> 2015-07-10 00:27:09,284 INFO 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period 
> at 10 second(s).
> 2015-07-10 00:27:09,285 INFO 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system 
> started
> 2015-07-10 00:27:09,339 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> fs.defaultFS is hdfs://localhost:9000
> 2015-07-10 00:27:09,340 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> Clients are to use localhost:9000 to access this namenode/service.
> 2015-07-10 00:27:12,475 WARN org.apache.hadoop.util.NativeCodeLoader: Unable 
> to load native-hadoop library for your platform... using builtin-java classes 
> where applicable
> 2015-07-10 00:27:16,632 INFO org.apache.hadoop.hdfs.DFSUtil: Starting 
> Web-server for hdfs at: http://0.0.0.0:50070
> 2015-07-10 00:27:17,491 INFO org.mortbay.log: Logging to 
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via 
> org.mortbay.log.Slf4jLog
> 2015-07-10 00:27:17,702 INFO 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable 
> to initialize FileSignerSecretProvider, falling back to use random secrets.
> 2015-07-10 00:27:17,876 INFO org.apache.hadoop.http.HttpRequestLog: Http 
> request log for http.requests.namenode is not defined
> 2015-07-10 00:27:17,941 INFO org.apache.hadoop.http.HttpServer2: Added global 
> filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> 2015-07-10 00:27:17,977 INFO org.apache.hadoop.http.HttpServer2: Added filter 
> static_user_filter 
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
> context hdfs
> 2015-07-10 00:27:17,977 INFO org.apache.hadoop.http.HttpServer2: Added filter 
> static_user_filter 
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
> context static
> 2015-07-10 00:27:17,977 INFO org.apache.hadoop.http.HttpServer2: Added filter 
> static_user_filter 
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
> context logs
> 2015-07-10 00:27:18,441 INFO org.apache.hadoop.http.HttpServer2: Added filter 
> 'org.apache.hadoop.hdfs.web.AuthFilter' 
> (class=org.apache.hadoop.hdfs.web.AuthFilter)
> 2015-07-10 00:27:18,525 INFO org.apache.hadoop.http.HttpServer2: 
> addJerseyResourcePackage: 
> packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources,
>  pathSpec=/webhdfs/v1/*
> 2015-07-10 00:27:18,747 INFO org.apache.hadoop.http.HttpServer2: Jetty bound 
> to port 50070
> 2015-07-10 00:27:18,760 INFO org.mortbay.log: jetty-6.1.26
> 2015-07-10 00:27:20,832 INFO org.mortbay.log: Started 
> HttpSe

[jira] [Resolved] (HADOOP-12179) Test Jira, please ignore

2015-07-07 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-12179.

Resolution: Pending Closed

> Test Jira, please ignore
> 
>
> Key: HADOOP-12179
> URL: https://issues.apache.org/jira/browse/HADOOP-12179
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>    Reporter: Arpit Agarwal
>    Assignee: Arpit Agarwal
> Attachments: HADOOP-12179.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12179) Test Jira, please ignore

2015-07-02 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12179:
--

 Summary: Test Jira, please ignore
 Key: HADOOP-12179
 URL: https://issues.apache.org/jira/browse/HADOOP-12179
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.7.1 RC0

2015-07-01 Thread Arpit Agarwal
Vinod, thanks for putting together this release.



+1 (binding)

 - Verified signatures
 - Installed binary release on Centos 6 pseudo cluster
* Copied files in and out of HDFS using the shell
* Mounted HDFS via NFS and copied a 10GB file in and out over NFS
* Ran example MapReduce jobs
 - Deployed pseudo cluster from sources on Centos 6, verified
   native bits
 - Deployed pseudo cluster from sources on Windows 2008 R2, verified 
   native bits and ran example MR jobs

Arpit


On 6/29/15, 1:45 AM, "Vinod Kumar Vavilapalli"  wrote:

>Hi all,
>
>I've created a release candidate RC0 for Apache Hadoop 2.7.1.
>
>As discussed before, this is the next stable release to follow up 2.6.0,
>and the first stable one in the 2.7.x line.
>
>The RC is available for validation at:
>http://people.apache.org/~vinodkv/hadoop-2.7.1-RC0/
>
>The RC tag in git is: release-2.7.1-RC0
>
>The maven artifacts are available via repository.apache.org at
>https://repository.apache.org/content/repositories/orgapachehadoop-1019/
>
>Please try the release and vote; the vote will run for the usual 5 days.
>
>Thanks,
>Vinod
>
>PS: It took 2 months instead of the planned [1] 2 weeks in getting this
>release out: post-mortem in a separate thread.
>
>[1]: A 2.7.1 release to follow up 2.7.0
>http://markmail.org/thread/zwzze6cqqgwq4rmw


[jira] [Created] (HADOOP-12163) Add xattr APIs to the FileSystem specification

2015-06-30 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12163:
--

 Summary: Add xattr APIs to the FileSystem specification
 Key: HADOOP-12163
 URL: https://issues.apache.org/jira/browse/HADOOP-12163
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Arpit Agarwal


The following extended attribute (xattr) APIs should be added to the [FileSystem 
specification|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html]:
# setXAttr
# getXAttr
# getXAttrs
# listXAttrs
# removeXAttr
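
A short usage sketch of the calls the new specification entries should pin 
down (path and attribute name are illustrative):

{code}
import java.nio.charset.StandardCharsets;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

FileSystem fs = FileSystem.get(new Configuration());
Path p = new Path("/tmp/example");                       // illustrative path
fs.setXAttr(p, "user.team", "hdfs".getBytes(StandardCharsets.UTF_8));
byte[] value = fs.getXAttr(p, "user.team");              // single attribute
List<String> names = fs.listXAttrs(p);                   // attribute names
fs.removeXAttr(p, "user.team");
{code}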



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

