Re: adding contributor roles timing out again

2016-08-18 Thread Vinod Kumar Vavilapalli
It happens to me too on both Firefox and Chrome.

+Vinod

> On Aug 18, 2016, at 8:39 AM, Chris Nauroth  wrote:
> 
> It’s odd that Firefox didn’t work for you.  My standard workaround is to use 
> Firefox, and that’s what I just did successfully for shenyinjie.
> 
> It’s quite mysterious to me that this problem would be browser-specific at 
> all though.
> 
> --Chris Nauroth
> 
> On 8/18/16, 6:53 AM, "Steve Loughran"  wrote:
> 
> 
>I'm trying to add a new contributor, "shenyinjie", but I'm getting the 
> "couldn't connect to server" message; I tried on Chrome and Firefox, and tried 
> pasting the username in rather than relying on any popup completion.
> 
>no joy: has anyone else succeeded at this recently? 
> 
> 
> 
> 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: adding contributor roles timing out again

2016-08-18 Thread Ray Chiang
I just added someone two days ago for MAPREDUCE-6751.  My machine is OS 
X El Capitan running Chrome 51.0.2704.103.


-Ray

On 8/18/16 6:53 AM, Steve Loughran wrote:

I'm trying to add a new contributor, "shenyinjie", but I'm getting the "couldn't 
connect to server" message; I tried on Chrome and Firefox, and tried pasting the 
username in rather than relying on any popup completion.

no joy: has anyone else succeeded at this recently?








-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.7.3 RC2

2016-08-18 Thread Junping Du
Thanks, Vinod, for creating the new RC for the 2.7.3 release.

+1 (binding) based on the following verifications:

- Downloaded the source and binary tarballs and verified the signatures (gpg --verify).

- Built from source with Java 1.8.0_31-b13 on Mac (native) successfully.

- Built from source with Java 1.7.0_79-b15 on an Ubuntu VM successfully.

- Deployed a pseudo-distributed cluster and ran some simple MR jobs (sleep, pi, 
teragen/terasort, etc.). All finished successfully.

- Checked the RM/NM web UI and went through some pages (Scheduler, Nodes, 
Applications, etc.). Everything works correctly.

- Verified basic YARN CLIs: checked the version (for hadoop and yarn), classpath, 
application list, etc. All seem to work fine.

Thanks,

Junping


From: Vinod Kumar Vavilapalli 
Sent: Thursday, August 18, 2016 3:05 AM
To: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
Cc: Vinod Kumar Vavilapalli
Subject: [VOTE] Release Apache Hadoop 2.7.3 RC2

Hi all,

I've created a new release candidate RC2 for Apache Hadoop 2.7.3.

As discussed before, this is the next maintenance release to follow up 2.7.2.

The RC is available for validation at: 
http://home.apache.org/~vinodkv/hadoop-2.7.3-RC2/ 


The RC tag in git is: release-2.7.3-RC2

The maven artifacts are available via repository.apache.org at 
https://repository.apache.org/content/repositories/orgapachehadoop-1046


The release notes are inside the tarballs at 
hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I hosted 
this at http://home.apache.org/~vinodkv/hadoop-2.7.3-RC2/releasenotes.html for 
your quick perusal.

As you may have noted,
 - a few issues with RC0 forced an RC1 [1]
 - a few more issues with RC1 forced an RC2 [2]
 - a very long fix-cycle for the License & Notice issues (HADOOP-12893) caused 
2.7.3 (along with every other Hadoop release) to slip by quite a bit. This 
release's related discussion thread is linked below: [3].

Please try the release and vote; the vote will run for the usual 5 days.

Thanks,
Vinod

[1] [VOTE] Release Apache Hadoop 2.7.3 RC0: 
https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/index.html#26106 

[2] [VOTE] Release Apache Hadoop 2.7.3 RC1: 
https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/msg26336.html 

[3] 2.7.3 release plan: 
https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/msg24439.html 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13518) backport HADOOP-9258 to branch-2

2016-08-18 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13518:
---

 Summary: backport HADOOP-9258 to branch-2
 Key: HADOOP-13518
 URL: https://issues.apache.org/jira/browse/HADOOP-13518
 Project: Hadoop Common
  Issue Type: Task
  Components: fs, fs/s3, test
Affects Versions: 2.9.0
Reporter: Steve Loughran
Assignee: Steve Loughran


I've just realised that HADOOP-9258 was never backported to branch-2. It went 
into branch-1, and into trunk, but not into the bit in the middle.

It adds
- more fs contract tests
- checks so that s3 and s3n rename don't let you rename a directory under itself (and delete it)

I'm going to try to create a patch for this, though it'll be tricky given how 
things have moved around a lot since then. 
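
A hedged sketch of the second item (illustrative names and logic, not the 
actual HADOOP-9258 patch):

{code}
import org.apache.hadoop.fs.Path;

// Sketch of a rename-under-self guard, assuming Hadoop Path semantics.
// Without such a check, rename("/a", "/a/b") on s3/s3n could copy the tree
// into itself and then delete the source.
public class RenameGuardSketch {
  static boolean isDescendant(Path src, Path dst) {
    String s = src.toUri().getPath();
    String d = dst.toUri().getPath();
    return d.equals(s) || d.startsWith(s.endsWith("/") ? s : s + "/");
  }

  public static void main(String[] args) {
    System.out.println(isDescendant(new Path("/a"), new Path("/a/b"))); // true -> reject
    System.out.println(isDescendant(new Path("/a"), new Path("/ab")));  // false -> allow
  }
}
{code}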



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13517) TestS3NContractRootDir.testRecursiveRootListing failing

2016-08-18 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13517:
---

 Summary: TestS3NContractRootDir.testRecursiveRootListing failing
 Key: HADOOP-13517
 URL: https://issues.apache.org/jira/browse/HADOOP-13517
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.0.0-alpha2
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor


While doing s3a tests against trunk, one of the S3n tests, 
{{TestS3NContractRootDir.testRecursiveRootListing}}, failed. 

This may be a failure of recursive listing of an empty root directory; it's 
transient because deletion inconsistencies mean the problem doesn't always 
surface.
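
A simplified sketch of what the test exercises (not the actual test body):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class RecursiveRootListingSketch {
  public static void main(String[] args) throws Exception {
    // With fs.defaultFS pointed at an s3n:// bucket, this is roughly what the
    // contract test does; a transient FileNotFoundException here would match
    // the deletion-inconsistency theory above.
    FileSystem fs = FileSystem.get(new Configuration());
    RemoteIterator<LocatedFileStatus> it = fs.listFiles(new Path("/"), true);
    while (it.hasNext()) {
      System.out.println(it.next().getPath());
    }
  }
}
{code}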



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.7.3 RC1

2016-08-18 Thread Kuhu Shukla
Hi All,
Thank you for all the inputs on HDFS-9395. I have opened HDFS-10776 to discuss 
the modifications needed for audit logging to be consistent and comprehensive. 
We can move this discussion to the new JIRA.
Appreciate the support.
Regards,
Kuhu Shukla

On Thursday, August 18, 2016 12:04 PM, Chris Nauroth 
 wrote:
 

 Andrew, thanks for adding your perspective on this.

What is a realistic strategy for us to evolve the HDFS audit log in a 
backward-compatible way?  If the API is essentially any form of ad-hoc 
scripting, then for any proposed audit log format change, I can find a reason 
to veto it on grounds of backward incompatibility.

- I can’t add a new field on the end, because that would break an awk script 
that uses $NF expecting to find a specific field.
- I can’t prepend a new field, because that would break a "cut -f1" expecting 
to find the timestamp.
- HDFS can’t add any new features, because someone might have written a 
script that does "exit 1" if it finds an unexpected RPC in the "cmd=" field.
- Hadoop is not allowed to add full IPv6 support, because someone might have 
written a script that looks at the "ip=" field and parses it by IPv4 syntax.

On the CLI, a potential solution for evolving the output is to preserve the old 
format by default and only enable the new format if the user explicitly passes 
a new argument.  What should we do for the audit log?  Configuration flags in 
hdfs-site.xml?  (That of course adds its own brand of complexity.)

I’m particularly interested to hear potential solutions from people like 
Andrew and Allen who have been most vocal about the need for a stable format.  
Without a solution, this unfortunately devolves into the format being frozen 
within a major release line.

We could benefit from getting a patch on the compatibility doc that addresses 
the HDFS audit log specifically. 

--Chris Nauroth

On 8/18/16, 8:47 AM, "Andrew Purtell"  wrote:

    An incompatible API change is developer unfriendly. An incompatible 
behavioral change is operator unfriendly. Historically, one dimension of 
incompatibility has had a lot more mindshare than the other. It's great that 
this might be changing for the better. 
    
    Where I work, when we move from one Hadoop 2.x minor to another, we always 
spend time updating our deployment plans, alerting, log scraping, and related 
things due to changes. Some are debatable as to whether they qualify for the 
'incompatible' designation. I think the audit logging change that triggered 
this discussion is a good example of one that does. If you want to audit HDFS 
actions those log emissions are your API. (Inotify doesn't offer access control 
events.) One has to code regular expressions for parsing them and reverse 
engineer under what circumstances an audit line is emitted so you can make 
assumptions about what transpired. Change either and you might break someone's 
automation for meeting industry or legal compliance obligations. Not a trivial 
matter. If you don't operate Hadoop in production you might not realize the 
implications of such a change. Glad to see Hadoop has community diversity to 
recognize it in some cases. 
    
    > On Aug 18, 2016, at 6:57 AM, Junping Du  wrote:
    > 
    > I think Allen's previous comments are very misleading. 
    > In my understanding, only incompatible API changes (RPC, CLIs, WebService, etc.) 
shouldn't land on branch-2, but other incompatible behaviors (logs, audit-log, 
daemon restarts, etc.) should have flexibility for landing. Otherwise, how could 
52 issues (https://s.apache.org/xJk5) marked with incompatible-changes have 
landed on branch-2 after the 2.2.0 release? Most of them are already released. 
    > 
    > Thanks,
    > 
    > Junping
    > 
    > From: Vinod Kumar Vavilapalli 
    > Sent: Wednesday, August 17, 2016 9:29 PM
    > To: Allen Wittenauer
    > Cc: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
    > Subject: Re: [VOTE] Release Apache Hadoop 2.7.3 RC1
    > 
    > I always look at CHANGES.txt entries for incompatible-changes and this 
JIRA obviously wasn’t there.
    > 
    > Anyways, this shouldn’t be in any of branch-2.* as committers there 
clearly mentioned that this is an incompatible change.
    > 
    > I am reverting the patch from branch-2* .
    > 
    > Thanks
    > +Vinod
    > 
    >> On Aug 16, 2016, at 9:29 PM, Allen Wittenauer 
 wrote:
    >> 
    >> 
    >> 
    >> -1
    >> 
    >> HDFS-9395 is an incompatible change:
    >> 
    >> a) Why is it not marked as such in the changes file?
    >> b) Why is an incompatible change in a micro release, much less a minor?
    >> c) Where is the release note for this change?
    >> 
    >> 
    >>> On Aug 12, 2016, at 9:45 AM, Vinod Kumar Vavilapalli 
 wrote:
    >>> 
    >>> Hi all,
    >>> 
    >>> I've created a release candidate RC1 for Apache Hadoop 2.7.3.
    >>> 
    >>> As discu

Re: [VOTE] Release Apache Hadoop 2.7.3 RC1

2016-08-18 Thread Andrew Purtell
>
What is a realistic strategy for us to evolve the HDFS audit log in a
backward-compatible way?  If the API is essentially any form of ad-hoc
scripting, then for any proposed audit log format change, I can find a
reason to veto it on grounds of backward incompatibility.

Yeah, when log scraping is the only way to get at the information, the API surface
expands to cover all manner of ad-hoc scripting.

Not sure moving away from emitting audit information in log lines would be
operator friendly. That's a tough one. Just about everything in the
ecosystem emits audit information as log lines. If Hadoop switches strategy
and becomes a one-off doing something different, this would be painful.

Assuming log lines will be the way we continue to receive audit events from
Hadoop/HDFS, please consider freezing any changes to audit logging today,
developing a formal specification, adding the specification to the
documentation, and then taking care not to break the specification between
releases. Because audit logging from the NN comes from low-level places in
FSNamesystem, this is going to constrain maintenance and refactoring of that
and related code, so with my software maintainer hat on I feel your pain in
advance. You'll want to hash out what level of compatibility you'd like to
offer. I'd recommend only changing on major releases.

On Thu, Aug 18, 2016 at 10:04 AM, Chris Nauroth 
wrote:

> Andrew, thanks for adding your perspective on this.
>
> What is a realistic strategy for us to evolve the HDFS audit log in a
> backward-compatible way?  If the API is essentially any form of ad-hoc
> scripting, then for any proposed audit log format change, I can find a
> reason to veto it on grounds of backward incompatibility.
>
> - I can’t add a new field on the end, because that would break an awk
> script that uses $NF expecting to find a specific field.
> - I can’t prepend a new field, because that would break a "cut -f1"
> expecting to find the timestamp.
> - HDFS can’t add any new features, because someone might have written a
> script that does "exit 1" if it finds an unexpected RPC in the "cmd=" field.
> - Hadoop is not allowed to add full IPv6 support, because someone might
> have written a script that looks at the "ip=" field and parses it by IPv4
> syntax.
>
> On the CLI, a potential solution for evolving the output is to preserve
> the old format by default and only enable the new format if the user
> explicitly passes a new argument.  What should we do for the audit log?
> Configuration flags in hdfs-site.xml?  (That of course adds its own brand
> of complexity.)
>
> I’m particularly interested to hear potential solutions from people like
> Andrew and Allen who have been most vocal about the need for a stable
> format.  Without a solution, this unfortunately devolves into the format
> being frozen within a major release line.
>
> We could benefit from getting a patch on the compatibility doc that
> addresses the HDFS audit log specifically.
>
> --Chris Nauroth
>
> On 8/18/16, 8:47 AM, "Andrew Purtell"  wrote:
>
> An incompatible API change is developer unfriendly. An incompatible
> behavioral change is operator unfriendly. Historically, one dimension of
> incompatibility has had a lot more mindshare than the other. It's great
> that this might be changing for the better.
>
> Where I work, when we move from one Hadoop 2.x minor to another, we
> always spend time updating our deployment plans, alerting, log scraping,
> and related things due to changes. Some are debatable as to whether they
> qualify for the 'incompatible' designation. I think the audit logging change that
> triggered this discussion is a good example of one that does. If you want
> to audit HDFS actions those log emissions are your API. (Inotify doesn't
> offer access control events.) One has to code regular expressions for
> parsing them and reverse engineer under what circumstances an audit line is
> emitted so you can make assumptions about what transpired. Change either
> and you might break someone's automation for meeting industry or legal
> compliance obligations. Not a trivial matter. If you don't operate Hadoop
> in production you might not realize the implications of such a change. Glad
> to see Hadoop has community diversity to recognize it in some cases.
>
> > On Aug 18, 2016, at 6:57 AM, Junping Du  wrote:
> >
> > I think Allen's previous comments are very misleading.
> > In my understanding, only incompatible API changes (RPC, CLIs, WebService,
> etc.) shouldn't land on branch-2, but other incompatible behaviors (logs,
> audit-log, daemon restarts, etc.) should have flexibility for landing.
> Otherwise, how could 52 issues (https://s.apache.org/xJk5) marked with
> incompatible-changes have landed on branch-2 after the 2.2.0 release? Most
> of them are already released.
> >
> > Thanks,
> >
> > Junping
> > 
> > From: Vinod Kumar Vavilapalli 
> > Sent: Wednesday, August 17, 2016 

Re: [VOTE] Release Apache Hadoop 2.7.3 RC1

2016-08-18 Thread Chris Nauroth
Andrew, thanks for adding your perspective on this.

What is a realistic strategy for us to evolve the HDFS audit log in a 
backward-compatible way?  If the API is essentially any form of ad-hoc 
scripting, then for any proposed audit log format change, I can find a reason 
to veto it on grounds of backward incompatibility.

- I can’t add a new field on the end, because that would break an awk script 
that uses $NF expecting to find a specific field.
- I can’t prepend a new field, because that would break a "cut -f1" expecting 
to find the timestamp.
- HDFS can’t add any new features, because someone might have written a script 
that does "exit 1" if it finds an unexpected RPC in the "cmd=" field.
- Hadoop is not allowed to add full IPv6 support, because someone might have 
written a script that looks at the "ip=" field and parses it by IPv4 syntax.
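
To make the hazard in the first two bullets concrete, a hedged sketch follows 
(the field layout is assumed from 2.x-era audit lines, and the parser is 
illustrative, not anyone's production script):

{code}
public class AuditLineParserSketch {
  public static void main(String[] args) {
    // Assumed NN audit line shape: tab-separated key=value fields after the
    // log4j prefix. Illustrative only.
    String line = "2016-08-18 10:04:58,123 INFO FSNamesystem.audit: "
        + "allowed=true\tugi=alice (auth:SIMPLE)\tip=/10.0.0.1"
        + "\tcmd=open\tsrc=/data/file\tdst=null\tperm=null";
    String[] f = line.substring(line.indexOf("allowed=")).split("\t");
    // Positional access, the Java analogue of "cut -f4" and awk's $NF:
    String cmd = f[3];              // breaks if a field is ever prepended
    String last = f[f.length - 1];  // breaks if a field is ever appended
    System.out.println(cmd + " | " + last);
  }
}
{code}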

On the CLI, a potential solution for evolving the output is to preserve the old 
format by default and only enable the new format if the user explicitly passes 
a new argument.  What should we do for the audit log?  Configuration flags in 
hdfs-site.xml?  (That of course adds its own brand of complexity.)

I’m particularly interested to hear potential solutions from people like Andrew 
and Allen who have been most vocal about the need for a stable format.  Without 
a solution, this unfortunately devolves into the format being frozen within a 
major release line.

We could benefit from getting a patch on the compatibility doc that addresses 
the HDFS audit log specifically. 

--Chris Nauroth

On 8/18/16, 8:47 AM, "Andrew Purtell"  wrote:

An incompatible API change is developer unfriendly. An incompatible 
behavioral change is operator unfriendly. Historically, one dimension of 
incompatibility has had a lot more mindshare than the other. It's great that 
this might be changing for the better. 

Where I work, when we move from one Hadoop 2.x minor to another, we always 
spend time updating our deployment plans, alerting, log scraping, and related 
things due to changes. Some are debatable as to whether they qualify for the 
'incompatible' designation. I think the audit logging change that triggered 
this discussion is a good example of one that does. If you want to audit HDFS 
actions those log emissions are your API. (Inotify doesn't offer access control 
events.) One has to code regular expressions for parsing them and reverse 
engineer under what circumstances an audit line is emitted so you can make 
assumptions about what transpired. Change either and you might break someone's 
automation for meeting industry or legal compliance obligations. Not a trivial 
matter. If you don't operate Hadoop in production you might not realize the 
implications of such a change. Glad to see Hadoop has community diversity to 
recognize it in some cases. 

> On Aug 18, 2016, at 6:57 AM, Junping Du  wrote:
> 
> I think Allen's previous comments are very misleading. 
> In my understanding, only incompatible API changes (RPC, CLIs, WebService, etc.) 
shouldn't land on branch-2, but other incompatible behaviors (logs, audit-log, 
daemon restarts, etc.) should have flexibility for landing. Otherwise, how could 
52 issues (https://s.apache.org/xJk5) marked with incompatible-changes have 
landed on branch-2 after the 2.2.0 release? Most of them are already released. 
> 
> Thanks,
> 
> Junping
> 
> From: Vinod Kumar Vavilapalli 
> Sent: Wednesday, August 17, 2016 9:29 PM
> To: Allen Wittenauer
> Cc: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> Subject: Re: [VOTE] Release Apache Hadoop 2.7.3 RC1
> 
> I always look at CHANGES.txt entries for incompatible-changes and this 
JIRA obviously wasn’t there.
> 
> Anyways, this shouldn’t be in any of branch-2.* as committers there 
clearly mentioned that this is an incompatible change.
> 
> I am reverting the patch from branch-2* .
> 
> Thanks
> +Vinod
> 
>> On Aug 16, 2016, at 9:29 PM, Allen Wittenauer 
 wrote:
>> 
>> 
>> 
>> -1
>> 
>> HDFS-9395 is an incompatible change:
>> 
>> a) Why is it not marked as such in the changes file?
>> b) Why is an incompatible change in a micro release, much less a minor?
>> c) Where is the release note for this change?
>> 
>> 
>>> On Aug 12, 2016, at 9:45 AM, Vinod Kumar Vavilapalli 
 wrote:
>>> 
>>> Hi all,
>>> 
>>> I've created a release candidate RC1 for Apache Hadoop 2.7.3.
>>> 
>>> As discussed before, this is the next maintenance release to follow up 
2.7.2.
>>> 
>>> The RC is available for validation at: 
http://home.apache.org/~vinodkv/hadoop-2.7.3-RC1/ 

>>> 
>>> The RC tag in git is: release-2.7.3-RC1
>>> 
>>> The maven artifacts are available vi

Re: [VOTE] Release Apache Hadoop 2.7.3 RC1

2016-08-18 Thread larry mccay
I believe it was described as: some previous audit entries have been
superseded by new ones, and the order may no longer be the same for
other entries.

For what it’s worth, I agree with the assertion that this is a
backward-incompatible output change - especially for audit logs.

On Thu, Aug 18, 2016 at 11:32 AM, Steve Loughran 
wrote:

>
> > On 18 Aug 2016, at 14:57, Junping Du  wrote:
> >
> > I think Allen's previous comments are very misleading.
> > In my understanding, only incompatible API changes (RPC, CLIs, WebService, etc.)
> shouldn't land on branch-2, but other incompatible behaviors (logs,
> audit-log, daemon restarts, etc.) should have flexibility for landing.
> Otherwise, how could 52 issues (https://s.apache.org/xJk5) marked with
> incompatible-changes have landed on branch-2 after the 2.2.0 release? Most
> of them are already released.
> >
> > Thanks,
> >
> > Junping
>
>
> Don't get AW started on compatibility; it'll only upset him.
>
> One thing he does care about is the ability of programs to consume the
> output of commands and logs, and for that even the output of commands and
> logs needs to continue to be parseable.
>
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/Compatibility.html#Command_Line_Interface_CLI
>
> " Changing the path of a command, removing or renaming command line
> options, the order of arguments, or the command return code and output
> break compatibility and may adversely affect users."
>
> I believe Allen is particularly concerned that a minor point release is
> going in as incompatible, on the basis that the audit log output will change:
> that's the log that is explicitly designed for machine processing, hooking
> up to Flume & Kafka, etc. As an example, Spotify spoke at a Hadoop Summit
> conference about how they used it to identify files which hadn't been used
> for a long time, inferring an atime attribute from the access history.
>
> What has changed in the output?
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


Re: [VOTE] Release Apache Hadoop 2.7.3 RC1

2016-08-18 Thread Andrew Purtell
An incompatible API change is developer unfriendly. An incompatible behavioral 
change is operator unfriendly. Historically, one dimension of incompatibility 
has had a lot more mindshare than the other. It's great that this might be 
changing for the better. 

Where I work, when we move from one Hadoop 2.x minor to another, we always spend 
time updating our deployment plans, alerting, log scraping, and related things 
due to changes. Some are debatable as to whether they qualify for the 'incompatible' 
designation. I think the audit logging change that triggered this discussion is 
a good example of one that does. If you want to audit HDFS actions those log 
emissions are your API. (Inotify doesn't offer access control events.) One has 
to code regular expressions for parsing them and reverse engineer under what 
circumstances an audit line is emitted so you can make assumptions about what 
transpired. Change either and you might break someone's automation for meeting 
industry or legal compliance obligations. Not a trivial matter. If you don't 
operate Hadoop in production you might not realize the implications of such a 
change. Glad to see Hadoop has community diversity to recognize it in some 
cases. 

> On Aug 18, 2016, at 6:57 AM, Junping Du  wrote:
> 
> I think Allen's previous comments are very misleading. 
> In my understanding, only incompatible API changes (RPC, CLIs, WebService, etc.) 
> shouldn't land on branch-2, but other incompatible behaviors (logs, 
> audit-log, daemon restarts, etc.) should have flexibility for landing. 
> Otherwise, how could 52 issues (https://s.apache.org/xJk5) marked with 
> incompatible-changes have landed on branch-2 after the 2.2.0 release? Most 
> of them are already released. 
> 
> Thanks,
> 
> Junping
> 
> From: Vinod Kumar Vavilapalli 
> Sent: Wednesday, August 17, 2016 9:29 PM
> To: Allen Wittenauer
> Cc: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
> yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> Subject: Re: [VOTE] Release Apache Hadoop 2.7.3 RC1
> 
> I always look at CHANGES.txt entries for incompatible-changes and this JIRA 
> obviously wasn’t there.
> 
> Anyways, this shouldn’t be in any of branch-2.* as committers there clearly 
> mentioned that this is an incompatible change.
> 
> I am reverting the patch from branch-2* .
> 
> Thanks
> +Vinod
> 
>> On Aug 16, 2016, at 9:29 PM, Allen Wittenauer  
>> wrote:
>> 
>> 
>> 
>> -1
>> 
>> HDFS-9395 is an incompatible change:
>> 
>> a) Why is it not marked as such in the changes file?
>> b) Why is an incompatible change in a micro release, much less a minor?
>> c) Where is the release note for this change?
>> 
>> 
>>> On Aug 12, 2016, at 9:45 AM, Vinod Kumar Vavilapalli  
>>> wrote:
>>> 
>>> Hi all,
>>> 
>>> I've created a release candidate RC1 for Apache Hadoop 2.7.3.
>>> 
>>> As discussed before, this is the next maintenance release to follow up 
>>> 2.7.2.
>>> 
>>> The RC is available for validation at: 
>>> http://home.apache.org/~vinodkv/hadoop-2.7.3-RC1/ 
>>> 
>>> 
>>> The RC tag in git is: release-2.7.3-RC1
>>> 
>>> The maven artifacts are available via repository.apache.org at 
>>> https://repository.apache.org/content/repositories/orgapachehadoop-1045/ 
>>> 
>>> 
>>> The release-notes are inside the tar-balls at location 
>>> hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I 
>>> hosted this at home.apache.org/~vinodkv/hadoop-2.7.3-RC1/releasenotes.html 
>>> for your quick perusal.
>>> 
>>> As you may have noted,
>>> - a few issues with RC0 forced an RC1 [1]
>>> - a very long fix-cycle for the License & Notice issues (HADOOP-12893) 
>>> caused 2.7.3 (along with every other Hadoop release) to slip by quite a 
>>> bit. This release's related discussion thread is linked below: [2].
>>> 
>>> Please try the release and vote; the vote will run for the usual 5 days.
>>> 
>>> Thanks,
>>> Vinod
>>> 
>>> [1] [VOTE] Release Apache Hadoop 2.7.3 RC0: 
>>> https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/index.html#26106 
>>> 
>>> [2]: 2.7.3 release plan: 
>>> https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/msg24439.html 
>>> 
>> 
>> 
>> -
>> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
> 
> 
> -
> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
> 
> 
> --

Re: adding contributor roles timing out again

2016-08-18 Thread Chris Nauroth
It’s odd that Firefox didn’t work for you.  My standard workaround is to use 
Firefox, and that’s what I just did successfully for shenyinjie.

It’s quite mysterious to me that this problem would be browser-specific at all 
though.

--Chris Nauroth

On 8/18/16, 6:53 AM, "Steve Loughran"  wrote:


I'm trying to add a new contributor, "shenyinjie", but I'm getting the 
"couldn't connect to server" message; I tried on Chrome and Firefox, and tried 
pasting the username in rather than relying on any popup completion.

no joy: has anyone else succeeded at this recently? 




-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org






[jira] [Reopened] (HADOOP-13516) Listing an empty s3a NON root directory throws FileNotFound.

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-13516:
-
  Assignee: Steve Loughran  (was: Lei (Eddy) Xu)

> Listing an empty s3a NON root directory throws FileNotFound.
> 
>
> Key: HADOOP-13516
> URL: https://issues.apache.org/jira/browse/HADOOP-13516
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Shaik Idris Ali
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
>
> With an empty s3 bucket, run
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/emptyDirectory
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/emptyDirectory': No such file or directory
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.7.3 RC1

2016-08-18 Thread Steve Loughran

> On 18 Aug 2016, at 14:57, Junping Du  wrote:
> 
> I think Allen's previous comments are very misleading. 
> In my understanding, only incompatible API changes (RPC, CLIs, WebService, etc.) 
> shouldn't land on branch-2, but other incompatible behaviors (logs, 
> audit-log, daemon restarts, etc.) should have flexibility for landing. 
> Otherwise, how could 52 issues (https://s.apache.org/xJk5) marked with 
> incompatible-changes have landed on branch-2 after the 2.2.0 release? Most 
> of them are already released. 
> 
> Thanks,
> 
> Junping


Don't get AW started on compatibility; it'll only upset him.

One thing he does care about is the ability of programs to consume the output 
of commands and logs, and for that even the output of commands and logs needs to 
continue to be parseable.

https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/Compatibility.html#Command_Line_Interface_CLI

" Changing the path of a command, removing or renaming command line options, 
the order of arguments, or the command return code and output break 
compatibility and may adversely affect users."

I believe Allen is particularly concerned that a minor point release is going 
in as incompatible, on the basis that the audit log output will change: that's the 
log that is explicitly designed for machine processing, hooking up to Flume & 
Kafka, etc. As an example, Spotify spoke at a Hadoop Summit conference about how 
they used it to identify files which hadn't been used for a long time, 
inferring an atime attribute from the access history.

What has changed in the output?

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13516) Listing an empty s3a NON root directory throws FileNotFound.

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13516.
-
   Resolution: Duplicate
 Assignee: Lei (Eddy) Xu
Fix Version/s: 2.8.0

> Listing an empty s3a NON root directory throws FileNotFound.
> 
>
> Key: HADOOP-13516
> URL: https://issues.apache.org/jira/browse/HADOOP-13516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Shaik Idris Ali
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Fix For: 2.8.0
>
>
> With an empty s3 bucket, run
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/emptyDirectory
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/emptyDirectory': No such file or directory
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13516) Listing an empty s3a NON root directory throws FileNotFound.

2016-08-18 Thread Shaik Idris Ali (JIRA)
Shaik Idris Ali created HADOOP-13516:


 Summary: Listing an empty s3a NON root directory throws 
FileNotFound.
 Key: HADOOP-13516
 URL: https://issues.apache.org/jira/browse/HADOOP-13516
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.7.0
Reporter: Shaik Idris Ali
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 2.8.0


With an empty s3 bucket, run

{code}
$ hadoop fs -D... -ls s3a://hdfs-s3a-test/

15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
ls: `s3a://hdfs-s3a-test/': No such file or directory

{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-08-18 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/137/

[Aug 17, 2016 10:04:58 AM] (vvasudev) YARN-5455. Update Javadocs for 
LinuxContainerExecutor. Contributed by
[Aug 17, 2016 4:22:31 PM] (jlowe) MAPREDUCE-6690. Limit the number of resources 
a single map reduce job
[Aug 17, 2016 8:15:33 PM] (varunsaxena) YARN-5523. Yarn running container log 
fetching causes OutOfMemoryError
[Aug 17, 2016 8:53:03 PM] (kihwal) HDFS-10745. Directly resolve paths into 
INodesInPath. Contributed by
[Aug 17, 2016 9:54:54 PM] (cnauroth) HADOOP-13208. S3A 
listFiles(recursive=true) to do a bulk listObjects
[Aug 17, 2016 10:00:14 PM] (aengineer) HADOOP-11786. Fix Javadoc typos in 
org.apache.hadoop.fs.FileSystem.
[Aug 17, 2016 10:52:38 PM] (xiao) HDFS-10549. Correctly revoke file leases when 
closing files. Contributed
[Aug 17, 2016 11:29:08 PM] (arp) HDFS-10773. BlockSender should not synchronize 
on the dataset object.
[Aug 18, 2016 12:40:20 AM] (kasha) YARN-4702. FairScheduler: Allow setting 
maxResources for ad hoc queues.




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.TestAppendSnapshotTruncate 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.api.impl.TestYarnClient 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/137/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/137/artifact/out/diff-compile-javac-root.txt
  [172K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/137/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/137/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/137/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/137/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/137/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/137/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/137/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/137/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [148K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/137/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/137/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/137/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/137/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [124K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/137/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

Re: [VOTE] Release Apache Hadoop 2.7.3 RC1

2016-08-18 Thread Junping Du
I think Allen's previous comments are very misleading. 
In my understanding, only incompatible API changes (RPC, CLIs, WebService, etc.) 
shouldn't land on branch-2, but other incompatible behaviors (logs, audit-log, 
daemon restarts, etc.) should have flexibility for landing. Otherwise, how could 
52 issues (https://s.apache.org/xJk5) marked with incompatible-changes have 
landed on branch-2 after the 2.2.0 release? Most of them are already released. 

Thanks,

Junping

From: Vinod Kumar Vavilapalli 
Sent: Wednesday, August 17, 2016 9:29 PM
To: Allen Wittenauer
Cc: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.7.3 RC1

I always look at CHANGES.txt entries for incompatible-changes and this JIRA 
obviously wasn’t there.

Anyways, this shouldn’t be in any of branch-2.* as committers there clearly 
mentioned that this is an incompatible change.

I am reverting the patch from branch-2* .

Thanks
+Vinod

> On Aug 16, 2016, at 9:29 PM, Allen Wittenauer  
> wrote:
>
>
>
> -1
>
> HDFS-9395 is an incompatible change:
>
> a) Why is it not marked as such in the changes file?
> b) Why is an incompatible change in a micro release, much less a minor?
> c) Where is the release note for this change?
>
>
>> On Aug 12, 2016, at 9:45 AM, Vinod Kumar Vavilapalli  
>> wrote:
>>
>> Hi all,
>>
>> I've created a release candidate RC1 for Apache Hadoop 2.7.3.
>>
>> As discussed before, this is the next maintenance release to follow up 2.7.2.
>>
>> The RC is available for validation at: 
>> http://home.apache.org/~vinodkv/hadoop-2.7.3-RC1/ 
>> 
>>
>> The RC tag in git is: release-2.7.3-RC1
>>
>> The maven artifacts are available via repository.apache.org at 
>> https://repository.apache.org/content/repositories/orgapachehadoop-1045/ 
>> 
>>
>> The release-notes are inside the tar-balls at location 
>> hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I 
>> hosted this at home.apache.org/~vinodkv/hadoop-2.7.3-RC1/releasenotes.html 
>> for your quick perusal.
>>
>> As you may have noted,
>> - a few issues with RC0 forced an RC1 [1]
>> - a very long fix-cycle for the License & Notice issues (HADOOP-12893) 
>> caused 2.7.3 (along with every other Hadoop release) to slip by quite a bit. 
>> This release's related discussion thread is linked below: [2].
>>
>> Please try the release and vote; the vote will run for the usual 5 days.
>>
>> Thanks,
>> Vinod
>>
>> [1] [VOTE] Release Apache Hadoop 2.7.3 RC0: 
>> https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/index.html#26106 
>> 
>> [2]: 2.7.3 release plan: 
>> https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/msg24439.html 
>> 
>
>
> -
> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
>
>


-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



adding contributor roles timing out again

2016-08-18 Thread Steve Loughran

I'm trying to add a new contributor, "shenyinjie", but I'm getting the "couldn't 
connect to server" message; I tried on Chrome and Firefox, and tried pasting the 
username in rather than relying on any popup completion.

no joy: has anyone else succeeded at this recently? 




-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13271) Intermittent failure of TestS3AContractRootDir.testListEmptyRootDirectory

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13271.
-
   Resolution: Cannot Reproduce
Fix Version/s: 2.8.0

> Intermittent failure of TestS3AContractRootDir.testListEmptyRootDirectory
> -
>
> Key: HADOOP-13271
> URL: https://issues.apache.org/jira/browse/HADOOP-13271
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
>
> I'm seeing an intermittent failure of 
> {{TestS3AContractRootDir.testListEmptyRootDirectory}}
> The sequence of {{deleteFiles(listStatus(Path("/")))}} is failing because the 
> file to delete is root ... yet the code is passing in the children of /, not / 
> itself.
> Hypothesis: when you call listStatus on an empty root dir, you get a file 
> entry back that says isFile, not isDirectory.
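
A hedged sketch of how one might check that hypothesis against a bucket
(illustrative, not the contract-test code):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EmptyRootListingCheck {
  public static void main(String[] args) throws Exception {
    // Point fs.defaultFS at the s3a bucket under test, then inspect what
    // listStatus reports for an empty root: per the hypothesis, a bad entry
    // would show up here with isFile=true.
    FileSystem fs = FileSystem.get(new Configuration());
    for (FileStatus st : fs.listStatus(new Path("/"))) {
      System.out.println(st.getPath() + " isFile=" + st.isFile());
    }
  }
}
{code}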



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13515) Redundant transitionToActive call can cause a NameNode to crash

2016-08-18 Thread Harsh J (JIRA)
Harsh J created HADOOP-13515:


 Summary: Redundant transitionToActive call can cause a NameNode to 
crash
 Key: HADOOP-13515
 URL: https://issues.apache.org/jira/browse/HADOOP-13515
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.5.0
Reporter: Harsh J
Priority: Minor


The situation in parts is similar to HADOOP-8217, but the cause is different 
and so is the result.

Consider this situation:

- At the beginning NN1 is Active, NN2 is Standby
- ZKFC1 faces a ZK disconnect (not a session timeout, just a socket disconnect) 
and thereby reconnects

{code}
2016-08-11 07:00:46,068 INFO org.apache.zookeeper.ClientCnxn: Client session 
timed out, have not heard from server in 4000ms for sessionid 
0x4566f0c97500bd9, closing socket connection and attempting reconnect
2016-08-11 07:00:46,169 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session 
disconnected. Entering neutral mode...
…
2016-08-11 07:00:46,610 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session 
connected.
{code}

- The reconnection on the ZKFC1 triggers the elector code, and the elector 
re-run finds that NN1 should be the new active (a redundant decision cause NN1 
is already active)

{code}
2016-08-11 07:00:46,615 INFO org.apache.hadoop.ha.ActiveStandbyElector: 
Checking for any old active which needs to be fenced...
2016-08-11 07:00:46,630 INFO org.apache.hadoop.ha.ActiveStandbyElector: Old 
node exists: …
2016-08-11 07:00:46,630 INFO org.apache.hadoop.ha.ActiveStandbyElector: But old 
node has our own data, so don't need to fence it.
{code}

- The ZKFC1 sets the new ZK data, and fires a NN1 RPC call of transitionToActive

{code}
2016-08-11 07:00:46,630 INFO org.apache.hadoop.ha.ActiveStandbyElector: Writing 
znode /hadoop-ha/nameservice1/ActiveBreadCrumb to indicate that the local node 
is the most recent active...
2016-08-11 07:00:46,649 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: 175: 
Call -> nn01/10.10.10.10:8022: transitionToActive {reqInfo { reqSource: 
REQUEST_BY_ZKFC }}
{code}

- At the same time as the transitionToActive call is in progress at NN1, but 
not complete yet, the ZK session of ZKFC1 is timed out by ZK Quorum, and a 
watch notification is sent to ZKFC2

{code}
2016-08-11 07:01:00,003 DEBUG org.apache.zookeeper.ClientCnxn: Got notification 
sessionid:0x4566f0c97500bde
2016-08-11 07:01:00,004 DEBUG org.apache.zookeeper.ClientCnxn: Got WatchedEvent 
state:SyncConnected type:NodeDeleted 
path:/hadoop-ha/nameservice1/ActiveStandbyElectorLock for sessionid 
0x4566f0c97500bde
{code}

- ZKFC2 responds by marking NN1 as standby, which succeeds (NN1 hasn't handled 
the transitionToActive call yet due to busy status, but has handled 
transitionToStandby before it)

{code}
2016-08-11 07:01:00,013 INFO org.apache.hadoop.ha.ActiveStandbyElector: 
Checking for any old active which needs to be fenced...
2016-08-11 07:01:00,018 INFO org.apache.hadoop.ha.ZKFailoverController: Should 
fence: NameNode at nn01/10.10.10.10:8022
2016-08-11 07:01:00,020 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: 412: 
Call -> nn01/10.10.10.10:8022: transitionToStandby {reqInfo { reqSource: 
REQUEST_BY_ZKFC }}
2016-08-11 07:01:03,880 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: Call: 
transitionToStandby took 3860ms
{code}

- ZKFC2 then marks NN2 as active, and NN2 begins its transition (is in midst of 
it, not done yet at this point)

{code}
2016-08-11 07:01:03,894 INFO org.apache.hadoop.ha.ZKFailoverController: Trying 
to make NameNode at nn02/11.11.11.11:8022 active...
2016-08-11 07:01:03,895 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: 412: 
Call -> nn02/11.11.11.11:8022: transitionToActive {reqInfo { reqSource: 
REQUEST_BY_ZKFC }}
…
{code}

{code}
2016-08-11 07:01:09,558 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required 
for active state
…
2016-08-11 07:01:19,968 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Will take over writing 
edit logs at txnid 5635
{code}

- At the same time in parallel NN1 processes the transitionToActive requests 
finally, and becomes active

{code}
2016-08-11 07:01:13,281 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required 
for active state
…
2016-08-11 07:01:19,599 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Will take over writing 
edit logs at txnid 5635
…
2016-08-11 07:01:19,602 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Starting log segment at 5635
{code}

- NN2's active transition fails as a result of this parallel active transition 
on NN1 which has completed right before it tries to take over

{code}
2016-08-11 07:01:19,968 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Will take over writing 
edit logs at txnid 5635
2016-08-11 07:01:22,799 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: 
Error encountered requiring NN shutdown.

[jira] [Created] (HADOOP-13514) Upgrade surefire to 2.19.1

2016-08-18 Thread Ewan Higgs (JIRA)
Ewan Higgs created HADOOP-13514:
---

 Summary: Upgrade surefire to 2.19.1
 Key: HADOOP-13514
 URL: https://issues.apache.org/jira/browse/HADOOP-13514
 Project: Hadoop Common
  Issue Type: Task
Reporter: Ewan Higgs
Priority: Minor


A lot of people working on Hadoop don't want to run all the tests when they 
develop; only the bits they're working on. Surefire 2.19 introduced more useful 
test filters which let us run a subset of the tests, bringing the build time 
down from 'come back tomorrow' to 'grab a coffee'.

For instance, if I only care about the S3 adaptor, I might run:

{code}
mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
\"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
org.apache.hadoop.fs.s3a.*\"
{code}

We can work around this by specifying the surefire version on the command line 
but it would be better, imo, to just update the default surefire used.

{code}
mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
\"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13513) Java 1.7 support for org.apache.hadoop.fs.azure testcases

2016-08-18 Thread Tibor Kiss (JIRA)
Tibor Kiss created HADOOP-13513:
---

 Summary: Java 1.7 support for org.apache.hadoop.fs.azure testcases
 Key: HADOOP-13513
 URL: https://issues.apache.org/jira/browse/HADOOP-13513
 Project: Hadoop Common
  Issue Type: Bug
  Components: azure
Affects Versions: 2.9.0
Reporter: Tibor Kiss
Assignee: Tibor Kiss
Priority: Minor
 Fix For: 2.9.0


A recent improvement to AzureNativeFileSystem rename/delete performance 
(HADOOP-13403) yielded a test change (HADOOP-13459) which is incompatible with 
Java 1.7.

If one tries to include those patches in a Java 1.7 compatible Hadoop tree 
(e.g. 2.7.x) the following error occurs during test run:
{code}
initializationError(org.apache.hadoop.fs.azure.TestNativeAzureFileSystemClientLogging)
  Time elapsed: 0.001 sec  <<< ERROR!
java.lang.Exception: Class org.apache.hadoop.fs.azure.AbstractWasbTestBase 
should be public
at 
org.junit.runners.model.FrameworkMethod.validatePublicVoid(FrameworkMethod.java:91)
at 
org.junit.runners.model.FrameworkMethod.validatePublicVoidNoArg(FrameworkMethod.java:70)
at 
org.junit.runners.ParentRunner.validatePublicVoidNoArgMethods(ParentRunner.java:133)
at 
org.junit.runners.BlockJUnit4ClassRunner.validateInstanceMethods(BlockJUnit4ClassRunner.java:165)
at 
org.junit.runners.BlockJUnit4ClassRunner.collectInitializationErrors(BlockJUnit4ClassRunner.java:104)
at org.junit.runners.ParentRunner.validate(ParentRunner.java:355)
at org.junit.runners.ParentRunner.(ParentRunner.java:76)
at 
org.junit.runners.BlockJUnit4ClassRunner.(BlockJUnit4ClassRunner.java:57)
at 
org.junit.internal.builders.JUnit4Builder.runnerForClass(JUnit4Builder.java:10)
at 
org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
at 
org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
at 
org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
at 
org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
{code}

The problem can be resolved by setting {{AbstractWasbTestBase}} to {{public}}.
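
A minimal sketch of the proposed one-line change (class body elided):

{code}
// Proposed fix: widen the visibility of the test base class so JUnit 4 on
// Java 1.7 can validate the test classes that extend it.
// was: abstract class AbstractWasbTestBase { ... }
public abstract class AbstractWasbTestBase {
  // ... unchanged test plumbing ...
}
{code}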



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13512) ReloadingX509TrustManager should keep reloading in case of exception

2016-08-18 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-13512:
--

 Summary: ReloadingX509TrustManager should keep reloading in case 
of exception
 Key: HADOOP-13512
 URL: https://issues.apache.org/jira/browse/HADOOP-13512
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.8.0
Reporter: Mingliang Liu
Assignee: Mingliang Liu


{{org.apache.hadoop.security.ssl.ReloadingX509TrustManager}} checks the key 
store file's last modified time to decide whether to reload. This avoids an 
unnecessary reload if the key store file has not changed. To do this, it 
updates an internal state variable, {{lastLoaded}}, whenever it tries to reload 
the file. It also updates {{lastLoaded}} in case of exception, so a failing 
reload will not be retried until the key store file's last modified time 
changes again.

Chances are that the reload happens while the key store file is being written. 
The reload fails (probably with EOFException) and won't be retried until the 
key store file's last modified time changes. Shortly afterwards, the key store 
file is closed after the update. However, the last modified time may not change 
if the write completes within the same timestamp-precision period (e.g. 1 
second). In this case, the updated key store file is never reloaded.

A simple fix is to update {{lastLoaded}} only when the reload succeeds, so that 
{{ReloadingX509TrustManager}} keeps reloading in case of exception.

Thoughts?
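
A hedged sketch of that fix (simplified; the real logic lives in 
{{ReloadingX509TrustManager}}, and these names are illustrative):

{code}
import java.io.File;
import java.io.IOException;

// Sketch: record lastLoaded only after a successful reload, so a read that
// failed mid-write is retried even if the file's mtime never advances again.
class ReloadOnlyOnSuccessSketch {
  private volatile long lastLoaded;
  private volatile Object trustManagers;  // stand-in for the real trust managers

  void maybeReload(File keyStore) {
    long mtime = keyStore.lastModified();
    if (mtime <= lastLoaded) {
      return;  // nothing new since the last *successful* load
    }
    try {
      trustManagers = load(keyStore);  // may throw, e.g. EOFException mid-write
      lastLoaded = mtime;              // moved after the load: success only
    } catch (IOException e) {
      // lastLoaded unchanged -> the next check retries the reload
    }
  }

  private Object load(File f) throws IOException {
    return new Object();  // placeholder for real key-store parsing
  }
}
{code}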



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org