[jira] [Created] (HADOOP-13056) Print expected values when rejecting a server's determined principal

2016-04-22 Thread Harsh J (JIRA)
Harsh J created HADOOP-13056:


 Summary: Print expected values when rejecting a server's 
determined principal
 Key: HADOOP-13056
 URL: https://issues.apache.org/jira/browse/HADOOP-13056
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial


When the service principal that a client constructs from the server's address 
matches neither the provided pattern nor the configured principal property, the 
error gives no hint as to which check failed or what was expected. Currently, 
the only error printed in both cases is:

{code}
 java.lang.IllegalArgumentException: Server has invalid Kerberos principal: 
hdfs/host.internal@REALM
{code}
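
A sketch of the proposed improvement (the variable names and the exact check 
below are illustrative assumptions, not the actual Hadoop code):

{code}
// Hypothetical sketch: name the expected values in the rejection, so users can
// tell whether the pattern or the configured principal caused the mismatch.
if (!serverPrincipal.matches(pattern) && !serverPrincipal.equals(confPrincipal)) {
  throw new IllegalArgumentException(
      "Server has invalid Kerberos principal: " + serverPrincipal
      + "; expected principal matching pattern '" + pattern
      + "' or the configured principal '" + confPrincipal + "'");
}
{code}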



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Looking to a Hadoop 3 release

2016-04-22 Thread Vinod Kumar Vavilapalli
Tx for your replies, Andrew.

> For exit criteria, how about we time box it? My plan was to do monthly
> alphas through the summer, leading up to beta in late August / early Sep.
> At that point we freeze and stabilize for GA in Nov/Dec.


Time-boxing is a reasonable exit criterion.


> In this case, does trunk-incompat essentially become the new trunk? Or are
> we treating trunk-incompat as a feature branch, which periodically merges
> changes from trunk?


It’s the latter. Essentially (see the sketch below):
 - trunk-incompat = trunk + only the incompatible changes, periodically kept 
up to date with trunk
 - trunk is always ready to ship
 - and no compatible code gets left behind
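
A hypothetical sketch of the flow, using the branch names from this thread (the 
exact commands are illustrative, not a worked-out policy):

  git checkout trunk-incompat
  git merge trunk   # periodically keep trunk-incompat up to date with trunk
  # 3.x GA releases are cut directly from trunk; incompatible patches land
  # only on trunk-incompat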

The reason for shaping my proposal this way is to address the tension between 
“there is a lot of compatible code in trunk that we are not shipping” and “don’t 
ship trunk, it has incompatibilities”. With this, no compatible code is left 
unshipped to users.

Obviously, we can drop my proposal entirely if everyone puts all compatible 
code into branch-2 / branch-3 or whatever the main releasable branch is. That 
hasn’t worked in practice; we saw it break down prominently during 0.21, and 
now with 3.x.

There is another related issue: “my feature is nearly ready, so I’ll just 
merge it into trunk, since we don’t release that anyway, but not into the 
current releasable branch - I’m too lazy to fix the last few stability-related 
issues.” With this, we will (should) get more disciplined: take feature 
stability on a branch seriously and merge a feature branch only when it is 
truly ready!

> For 3.x, my strawman was to release off trunk for the alphas, then cut a
> branch-3 for the beta and onwards.


Repeating the above, I’m proposing that we continue to make GA 3.x releases 
off of trunk as well! This way, only incompatible changes are withheld from 
users - by design! Eventually, trunk-incompat will be the latest 3.x GA plus 
enough incompatible code to warrant a 4.x, 5.x, etc.

+Vinod

Re: Looking to a Hadoop 3 release

2016-04-22 Thread Allen Wittenauer

> On Apr 22, 2016, at 6:10 PM, Vinod Kumar Vavilapalli  
> wrote:
> 
> Nope.
> 
> I’m proposing making a new 3.x release (as has been discussed in this thread) 
> off today’s trunk (instead of creating a fresh branch-3) and creating a new 
> trunk-incompat where the incompatible changes that we don’t want in 3.x go.
> 
> This is mainly to avoid repeating the “we are not releasing 3.x off trunk” 
> issue when we start thinking about 4.x or any such major release in the 
> future.

The only difference between “we aren’t releasing 4.x off of trunk” and 
“we aren’t releasing 4.x off of trunk-incompat” is 10 characters.

Re: Looking to a Hadoop 3 release

2016-04-22 Thread Andrew Wang
Great comments Vinod, thanks for replying.

Since trunk is a superset of branch-2.8, I think the two efforts are mostly
aligned. The 2.8 blockers are likely also 3.0 blockers. For example, the
create-release and L JIRAs I mentioned are in this camp. The difference
between the two is the expectation as to the level of quality. Once we get
create-release and L settled, I think it's ready for an alpha. Yes, this
means we ship with some known issues, but right now there's no 3.0 artifact
for downstreams to compile and test against. Considering that we're
shipping incompatible changes, I want to give downstreams as much
opportunity to give feedback as possible.

> While welcoming the push for alphas, I think we should set some exit
> criteria. Otherwise, I can imagine us doing 3/4/5 alpha releases, and then
> getting restless about calling it beta or GA or whatever. Essentially,
> instead of today’s questions as to "why we aren’t doing a 3.x release",
> we’d be fielding a "why is 3.x still considered alpha” question. This
> happened with 2.x alpha releases too and it wasn’t fun.
>
For exit criteria, how about we time box it? My plan was to do monthly
alphas through the summer, leading up to beta in late August / early Sep.
At that point we freeze and stabilize for GA in Nov/Dec.

I think we all have an interest in declaring beta/GA, no one wants eternal
alpha releases.

> On an unrelated note, offline I was pitching to a bunch of contributors
> another idea to deal with rotting trunk post 3.x: *Make 3.x releases off of
> trunk directly*.
>
> What this gains us is that
>  - Trunk is always nearly stable or nearly ready for releases
>  - We no longer have some code lying around in some branch (today’s trunk)
> that is not releasable because it gets mixed with other undesirable and
> incompatible changes.
>  - This needs to be coupled with more discipline on individual features -
> medium to large features are always worked upon in branches and get
> merged into trunk (and a nearing release!) when they are ready
>  - All incompatible changes go into some sort of a trunk-incompat branch
> and stay there till we accumulate enough of those to warrant another major
> release.
>

In this case, does trunk-incompat essentially become the new trunk? Or are
we treating trunk-incompat as a feature branch, which periodically merges
changes from trunk?

Linux has a "next" branch, separate from master, for integrating pending
feature branches. I think this is a good model, and it would be even better if
we published artifacts to assist with testing. However, that depends on
someone stepping up to be the maintainer of the integration branch.

I'd really welcome a more stringent policy around branch merges and new
feature development. That'd be great.

For 3.x, my strawman was to release off trunk for the alphas, then cut a
branch-3 for the beta and onwards.

Best,
Andrew


Re: Looking to a Hadoop 3 release

2016-04-22 Thread Vinod Kumar Vavilapalli
Nope.

I’m proposing making a new 3.x release (as has been discussed in this thread) 
off today’s trunk (instead of creating a fresh branch-3) and creating a new 
trunk-incompat where the incompatible changes that we don’t want in 3.x go.

This is mainly to avoid repeating the “we are not releasing 3.x off trunk” 
issue when we start thinking about 4.x or any such major release in the future.

We’ll do 2.8.x independently and later figure out if 2.9 is needed or not.

+Vinod

> On Apr 22, 2016, at 5:59 PM, Allen Wittenauer  wrote:
> 
> 
>> On Apr 22, 2016, at 5:38 PM, Vinod Kumar Vavilapalli  
>> wrote:
>> 
>> On an unrelated note, offline I was pitching to a bunch of contributors 
>> another idea to deal with rotting trunk post 3.x: *Make 3.x releases off of 
>> trunk directly*.
>> 
>> What this gains us is that
>> - Trunk is always nearly stable or nearly ready for releases
>> - We no longer have some code lying around in some branch (today’s trunk) 
>> that is not releasable because it gets mixed with other undesirable and 
>> incompatible changes.
>> - This needs to be coupled with more discipline on individual features - 
>> medium to large features are always worked upon in branches and get 
>> merged into trunk (and a nearing release!) when they are ready
>> - All incompatible changes go into some sort of a trunk-incompat branch and 
>> stay there till we accumulate enough of those to warrant another major 
>> release.
>> 
>> Thoughts?
> 
>   Unless I’m missing something, all this proposal does is (using today’s 
> branch names) effectively rename trunk to trunk-incompat and branch-2 to 
> trunk.  I’m unclear how moving "rotting trunk” to “rotting trunk-incompat” is 
> really progress.
> 
> 



Re: Looking to a Hadoop 3 release

2016-04-22 Thread Allen Wittenauer

> On Apr 22, 2016, at 5:38 PM, Vinod Kumar Vavilapalli  
> wrote:
> 
> On an unrelated note, offline I was pitching to a bunch of contributors 
> another idea to deal with rotting trunk post 3.x: *Make 3.x releases off of 
> trunk directly*.
> 
> What this gains us is that
> - Trunk is always nearly stable or nearly ready for releases
> - We no longer have some code lying around in some branch (today’s trunk) 
> that is not releasable because it gets mixed with other undesirable and 
> incompatible changes.
> - This needs to be coupled with more discipline on individual features - 
> medium to large features are always worked upon in branches and get merged 
> into trunk (and a nearing release!) when they are ready
> - All incompatible changes go into some sort of a trunk-incompat branch and 
> stay there till we accumulate enough of those to warrant another major 
> release.
> 
> Thoughts?

Unless I’m missing something, all this proposal does is (using today’s 
branch names) effectively rename trunk to trunk-incompat and branch-2 to trunk. 
 I’m unclear how moving "rotting trunk” to “rotting trunk-incompat” is really 
progress.



Re: Looking to a Hadoop 3 release

2016-04-22 Thread Vinod Kumar Vavilapalli
Hi,

While welcoming the push for alphas, I think we should set some exit criteria. 
Otherwise, I can imagine us doing 3/4/5 alpha releases, and then getting 
restless about calling it beta or GA or whatever. Essentially, instead of 
today’s questions as to "why we aren’t doing a 3.x release", we’d be fielding a 
"why is 3.x still considered alpha” question. This happened with 2.x alpha 
releases too and it wasn’t fun.

On an unrelated note, offline I was pitching to a bunch of contributors another 
idea to deal with rotting trunk post 3.x: *Make 3.x releases off of trunk 
directly*.

What this gains us is that
 - Trunk is always nearly stable or nearly ready for releases
 - We no longer have some code lying around in some branch (today’s trunk) that 
is not releasable because it gets mixed with other undesirable and incompatible 
changes.
 - This needs to be coupled with more discipline on individual features - 
medium to large features are always worked upon in branches and get merged 
into trunk (and a nearing release!) when they are ready
 - All incompatible changes go into some sort of a trunk-incompat branch and 
stay there till we accumulate enough of those to warrant another major release.

Thoughts?

+Vinod


> On Apr 21, 2016, at 4:31 PM, Andrew Wang  wrote:
> 
> Hi folks,
> 
> Very optimistically, we're still on track for a 3.0 alpha this month.
> Here's a JIRA query for 3.0 and 2.8:
> 
> https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2C%20MAPREDUCE%2C%20YARN)%20AND%20%22Target%20Version%2Fs%22%20in%20(3.0.0%2C%202.8.0)%20AND%20statusCategory%20not%20in%20(Complete)%20ORDER%20BY%20priority
> 
> I think two of these are true alpha blockers: HADOOP-12892 and
> HADOOP-12893. I'm trying to help push both of those forward.
> 
> For the rest, I think it's probably okay to delay until the next alpha,
> since we're planning a few alphas leading up to beta. That said, if you are
> the owner of a Blocker targeted at 3.0.0, I'd encourage reviving those
> patches. The earlier the better for incompatible changes.
> 
> In all likelihood, this first release will slip into early May, but I'll be
> disappointed if we don't have an RC out before ApacheCon.
> 
> Best,
> Andrew
> 
> On Mon, Feb 22, 2016 at 3:19 PM, Colin P. McCabe  wrote:
> 
>> I think starting a 3.0 alpha soon would be a great idea.  As some
>> other people commented, this would come with no compatibility
>> guarantees, so that we can iron out any issues.
>> 
>> Colin
>> 
>> On Mon, Feb 22, 2016 at 1:26 PM, Zhe Zhang  wrote:
>>> Thanks Andrew for driving the effort!
>>> 
>>> +1 (non-binding) on starting the 3.0 release process now with 3.0 as an
>>> alpha.
>>> 
>>> I wanted to echo Andrew's point that backporting EC to branch-2 is a lot
>> of
>>> work. Considering that no concrete backporting plan has been proposed, it
>>> seems quite uncertain whether / when it can be released in 2.9. I think
>> we
>>> should rather concentrate our EC dev efforts to harden key features under
>>> the follow-on umbrella HDFS-8031 and make it solid for a 3.0 release.
>>> 
>>> Sincerely,
>>> Zhe
>>> 
>>> On Mon, Feb 22, 2016 at 9:25 AM Colin P. McCabe 
>> wrote:
>>> 
 +1 for a release of 3.0.  There are a lot of significant,
 compatibility-breaking, but necessary changes in this release... we've
 touched on some of them in this thread.
 
 +1 for a parallel release of 2.8 as well.  I think we are pretty close
 to this, barring a dozen or so blockers.
 
 best,
 Colin
 
 On Mon, Feb 22, 2016 at 2:56 AM, Steve Loughran  wrote:
> 
>> On 20 Feb 2016, at 15:34, Junping Du  wrote:
>> 
>> Shall we consolidate effort for 2.8.0 and 3.0.0? It doesn't sound
>> reasonable to have two alpha releases going in parallel. Is EC feature the
>> main motivation of releasing hadoop 3 here? If so, I don't understand why
>> this feature cannot land on 2.8.x or 2.9.x as an alpha feature.
> 
> 
> 
>> If we release 3.0 in a month like the plan proposed below, it means we will
>> have 4 active releases going in parallel - two alpha releases (2.8 and 3.0)
>> and two stable releases (2.6.x and 2.7.x). It brings a lot of challenges in
>> issue tracking and patch committing, not to mention the tremendous effort
>> of release verification and voting.
>> I would like to propose waiting for the 2.8 release to become stable (maybe
>> the 2nd release in the 2.8 branch, since the first release is alpha due to
>> the discussion in another email thread); then we can move to 3.0 as the
>> only alpha release. In the meantime, we can bring more significant features
>> (like ATS v2, etc.) to trunk and consolidate stable releases in 2.6.x and
>> 2.7.x. I believe that makes life easier. :)
>> Thoughts?
>> 
> 
> 2.8.0 is 

Re: Looking to a Hadoop 3 release

2016-04-22 Thread Vinod Kumar Vavilapalli
I kind of echo Junping’s comment too.

While 2.8 and 3.0 don’t need to be serialized in theory, in practice I’m 
desperately looking for help on 2.8.0. We haven’t been converging on 2.8.0, 
with 50+ blocker / critical patches still unfinished. If postponing the 3.x 
alpha to after a 2.8.0 alpha means undivided attention from the community, I’d 
strongly root for such a proposal.

Thanks
+Vinod

> On Feb 20, 2016, at 9:07 PM, Andrew Wang  wrote:
> 
> Hi Junping, thanks for the mail, inline:
> 
> On Sat, Feb 20, 2016 at 7:34 AM, Junping Du  wrote:
> 
>> Shall we consolidate effort for 2.8.0 and 3.0.0? It doesn't sound
>> reasonable to have two alpha releases going in parallel. Is EC feature the
>> main motivation of releasing hadoop 3 here? If so, I don't understand why
>> this feature cannot land on 2.8.x or 2.9.x as an alpha feature.
>> 
> 
> EC is one motivation, there are others too (JDK8, shell scripts, jar
> bumps). I'm open to EC going into branch-2, but I haven't seen any
> backporting yet and it's a lot of code.
> 
> 
>> If we release 3.0 in a month like the plan proposed below, it means we will
>> have 4 active releases going in parallel - two alpha releases (2.8 and 3.0)
>> and two stable releases (2.6.x and 2.7.x). It brings a lot of challenges in
>> issue tracking and patch committing, not to mention the tremendous
>> effort of release verification and voting.
>> I would like to propose waiting for the 2.8 release to become stable (maybe
>> the 2nd release in the 2.8 branch, since the first release is alpha due to
>> the discussion in another email thread); then we can move to 3.0 as the
>> only alpha release. In the meantime, we can bring more significant features
>> (like ATS v2, etc.) to trunk and consolidate stable releases in 2.6.x and
>> 2.7.x. I believe that makes life easier. :)
>> Thoughts?
>> Thoughts?
>> 
> Based on some earlier mails in this chain, I was planning to release off
> trunk. This way we avoid having to commit to yet another branch, and it makes
> tracking easier since trunk will always be a superset of the branch-2's.
> This does mean though that trunk needs to be stable, and we need to be more
> judicious with branch merges, and quickly revert broken code.
> 
> Regarding RM/voting/validation efforts, Steve mentioned some scripts that
> he uses to automate Slider releases. This is something I'd like to bring
> over to Hadoop. Ideally, publishing an RC is push-button, and it comes with
> automated validation. I think this will help with the overhead. Also, since
> these will be early alphas, and there will be a lot of them, I'm not
> expecting anyone to do endurance runs on a large cluster before casting a
> +1.
> 
> Best,
> Andrew



Re: [Release thread] 2.8.0 release activities

2016-04-22 Thread Vinod Kumar Vavilapalli
We are not converging - there are still 58 more. I need help from the community 
in addressing / reviewing 2.8.0 blockers. If folks can start by reviewing Patch 
Available tickets, that’ll be great.


Thanks
+Vinod

> On Apr 4, 2016, at 2:16 PM, Vinod Kumar Vavilapalli  
> wrote:
> 
> Here we go again. The blocker / critical tickets ballooned up a lot, I see 64 
> now! : https://issues.apache.org/jira/issues/?filter=12334985
> 
> Also, the docs target (mvn package -Pdocs -DskipTests) is completely busted 
> on branch-2; I figured I have to backport a whole bunch of patches that are 
> only on trunk, and maybe more fixes on top of that *sigh*
> 
> I’ll start pushing for progress for an RC in a week or two.
> 
> Any help towards this, reviewing/committing outstanding patches and 
> contributing to open items is greatly appreciated.
> 
> Thanks
> +Vinod
> 
>> On Feb 9, 2016, at 11:51 AM, Vinod Kumar Vavilapalli  
>> wrote:
>> 
>> Sure. The last time I checked, there were 20-odd blocker/critical tickets 
>> too that’ll need some of my time.
>> 
>> Given that, if you can get them in before a week, we should be good.
>> 
>> +Vinod
>> 
>>> On Feb 5, 2016, at 1:19 PM, Subramaniam V K  wrote:
>>> 
>>> Vinod,
>>> 
>>> Thanks for initiating the 2.8 release thread. We are in late review stages
>>> for YARN-4420 (Add REST API for listing reservations) and YARN-2575 (Adding
>>> ACLs for reservation system), hoping to get them in by next week. Any chance
>>> you can put off cutting 2.8 by a week as we are planning to deploy
>>> ReservationSystem and these are critical for that?
>>> 
>>> Cheers,
>>> Subru
>>> 
>>> On Thu, Feb 4, 2016 at 3:17 PM, Chris Nauroth 
>>> wrote:
>>> 
 FYI, I've just needed to raise HDFS-9761 to blocker status for the 2.8.0
 release.
 
 --Chris Nauroth
 
 
 
 
 On 2/3/16, 6:19 PM, "Karthik Kambatla"  wrote:
 
> Thanks Vinod. Not labeling 2.8.0 stable sounds perfectly reasonable to me.
> Let us not call it alpha or beta though, it is quite confusing. :)
> 
> On Wed, Feb 3, 2016 at 8:17 PM, Gangumalla, Uma  
> wrote:
> 
>> Thanks Vinod. +1 for 2.8 release start.
>> 
>> Regards,
>> Uma
>> 
>> On 2/3/16, 3:53 PM, "Vinod Kumar Vavilapalli" 
>> wrote:
>> 
>>> Seems like all the features listed in the Roadmap wiki are in. I'm going
>>> to try cutting an RC this weekend for a first/non-stable release off of
>>> branch-2.8.
>>> 
>>> Let me know if anyone has any objections/concerns.
>>> 
>>> Thanks
>>> +Vinod
>>> 
 On Nov 25, 2015, at 5:59 PM, Vinod Kumar Vavilapalli
  wrote:
 
 Branch-2.8 is created.
 
 As mentioned before, the goal on branch-2.8 is to put improvements /
 fixes to existing features with a goal of converging on an alpha release
 soon.
 
 Thanks
 +Vinod
 
 
> On Nov 25, 2015, at 5:30 PM, Vinod Kumar Vavilapalli
>  wrote:
> 
> Forking threads now in order to track all things related to the
> release.
> 
> Creating the branch now.
> 
> Thanks
> +Vinod
> 
> 
>> On Nov 25, 2015, at 11:37 AM, Vinod Kumar Vavilapalli
>>  wrote:
>> 
>> I think we've converged at a high level w.r.t 2.8. And as I just sent
>> out an email, I updated the Roadmap wiki reflecting the same:
>> https://wiki.apache.org/hadoop/Roadmap
>> 
>> 
>> I plan to create a 2.8 branch EOD today.
>> 
>> The goal for all of us should be to restrict improvements & fixes to
>> only (a) the feature-set documented under 2.8 in the RoadMap wiki and
>> (b) other minor features that are already in 2.8.
>> 
>> Thanks
>> +Vinod
>> 
>> 
>>> On Nov 11, 2015, at 12:13 PM, Vinod Kumar Vavilapalli
>>> > wrote:
>>> 
>>> - Cut a branch about two weeks from now
>>> - Do an RC mid next month (leaving ~4 weeks since branch-cut)
>>> - As with the 2.7.x series, the first release will still be called an
>>> early / alpha release in the interest of
>>>   - gaining downstream adoption
>>>   - wider testing,
>>>   - yet reserving our right to fix any inadvertent incompatibilities
>>> introduced.
>> 



Jenkins build is back to normal : Hadoop-common-trunk-Java8 #1383

2016-04-22 Thread Apache Jenkins Server
See 



Jenkins build is back to normal : Hadoop-Common-trunk #2668

2016-04-22 Thread Apache Jenkins Server
See 



Build failed in Jenkins: Hadoop-common-trunk-Java8 #1382

2016-04-22 Thread Apache Jenkins Server
See 

Changes:

[kihwal] HADOOP-13052. ChecksumFileSystem mishandles crc file permissions.

--
[...truncated 5548 lines...]
[Surefire output elided: unit tests under org.apache.hadoop.util (TestVersionUtil, 
TestProtoUtil, TestLightWeightGSet, TestGSet, TestStringInterner, TestZKUtil, 
TestStringUtils, TestFindClass, TestGenericOptionsParser, TestRunJar, 
TestSysInfoLinux, TestDirectBufferPool, TestFileBasedIPList, TestIndexedSort, 
TestIdentityHashStore, TestMachineList, TestWinUtils and hash.TestHash) passed 
with no failures or errors (TestWinUtils entirely skipped), each preceded by the 
repeated JVM warning "ignoring option MaxPermSize=768m; support was removed in 
8.0"; the message is truncated here.]

Re: Commit History Edit Alert

2016-04-22 Thread Karthik Kambatla
FWIW, the Hadoop team at Cloudera in the past allowed force pushes to our
internal repo. It was common practice to email the rest of the team when
force-pushing, to make sure we didn't lose commits. It was quite painful; we
decided to disallow force pushes, and life has gotten much better.

PS: We also use gerrit, so that serializes our commits for us.

On Fri, Apr 22, 2016 at 2:04 PM, Karthik Kambatla 
wrote:

> Owen, I agree force-pushes likely make for cleaner history, but they also
> allow losing commits in case of race conditions. Since the previous
> decision of disabling force pushes was discussed in a DISCUSS thread and
> others might want to weigh in with their opinions, mind starting a DISCUSS
> thread for changing this policy so others are in the know?
>
> Agree tagging releases is good practice. I believe we do this already and
> sign, but not sure if we place them under "rel/*" namespace.
>
> On Fri, Apr 22, 2016 at 12:07 PM, Owen O'Malley 
> wrote:
>
>> In my opinion, prohibiting forced updates causes more pain than it helps.
>>
>> The much more important part is that someone should make tags for the
>> recent releases in the "rel/*" namespace so that they can't be modified.
>> I'd suggest at least:
>>
>> release-2.6.4
>> release-2.7.2
>>
>> .. Owen
>>
>> On Fri, Apr 22, 2016 at 11:29 AM, Karthik Kambatla 
>> wrote:
>>
>> > Recent changes to branch policies have affected this. Our trunk seems
>> > protected, but not branch-* branches. I had filed INFRA-11236 to address
>> > this. I can't ping on the JIRA anymore because comments from
>> > non-contributors are disabled.
>> >
>> > To prevent unintentional major messes, it is highly recommended we don't
>> > force push. For JIRA ID mistakes in commit messages, we have been filing
>> > another (possibly empty) commit that just says there was a mistake. e.g.
>> > something along the lines of "Mistakenly committed HADOOP-13011 as
>> > HADOOP-13001."
>> >
>> > On Thu, Apr 21, 2016 at 9:42 PM, larry mccay  wrote:
>> >
>> > > I believe that he squashed my attempted --amend into a single commit
>> on
>> > > branch-2.8.
>> > > Not sure about trunk and branch-2.
>> > >
>> > > Thanks for the clarification on the formatting.
>> > > I will comply in the future.
>> > >
>> > > For such issues, is a dev@ email first better than trying to "fix"
>> it?
>> > >
>> > > Again, sorry for the inconvenience.
>> > >
>> > > On Fri, Apr 22, 2016 at 12:10 AM, Andrew Wang <
>> andrew.w...@cloudera.com>
>> > > wrote:
>> > >
>> > > > What does "fix" mean? We aren't supposed to force push to
>> non-feature
>> > > > branches, and actually thought this was disabled.
>> > > >
>> > > > Also FYI for the future, we normally format our commit messages with
>> > > > periods, e.g.:
>> > > >
>> > > > HADOOP-13011. Clearly Document the Password Details for
>> Keystore-based
>> > > > Credential Providers.
>> > > >
>> > > >
>> > > > On Thu, Apr 21, 2016 at 8:26 PM, larry mccay 
>> > wrote:
>> > > >
>> > > > > All -
>> > > > >
>> > > > > My first hadoop commit for HADOOP-13011 inadvertently referenced
>> the
>> > > > wrong
>> > > > > JIRA (HADOOP-13001) in the commit message.
>> > > > >
>> > > > > Owen O'Malley helped me out by fixing the history on all 3
>> branches:
>> > > > trunk,
>> > > > > branch-2, branch-2.8. The message is correct now in the current
>> > history
>> > > > but
>> > > > > you may need to rebase to the current history for things to align
>> > > > properly.
>> > > > >
>> > > > > I apologize for the inconvenience.
>> > > > >
>> > > > > thanks,
>> > > > >
>> > > > > --larry
>> > > > >
>> > > >
>> > >
>> >
>>
>
>


Re: Commit History Edit Alert

2016-04-22 Thread Karthik Kambatla
Owen, I agree force-pushes likely make for cleaner history, but they also
allow losing commits in case of race conditions. Since the previous
decision of disabling force pushes was discussed in a DISCUSS thread and
others might want to weigh in with their opinions, mind starting a DISCUSS
thread for changing this policy so others are in the know?

Agree tagging releases is good practice. I believe we do this already and
sign, but not sure if we place them under "rel/*" namespace.

On Fri, Apr 22, 2016 at 12:07 PM, Owen O'Malley  wrote:

> In my opinion, prohibiting forced updates causes more pain than it helps.
>
> The much more important part is that someone should make tags for the
> recent releases in the "rel/*" namespace so that they can't be modified.
> I'd suggest at least:
>
> release-2.6.4
> release-2.7.2
>
> .. Owen
>
> On Fri, Apr 22, 2016 at 11:29 AM, Karthik Kambatla 
> wrote:
>
> > Recent changes to branch policies have affected this. Our trunk seems
> > protected, but not branch-* branches. I had filed INFRA-11236 to address
> > this. I can't ping on the JIRA anymore because comments from
> > non-contributors are disabled.
> >
> > To prevent unintentional major messes, it is highly recommended we don't
> > force push. For JIRA ID mistakes in commit messages, we have been filing
> > another (possibly empty) commit that just says there was a mistake. e.g.
> > something along the lines of "Mistakenly committed HADOOP-13011 as
> > HADOOP-13001."
> >
> > On Thu, Apr 21, 2016 at 9:42 PM, larry mccay  wrote:
> >
> > > I believe that he squashed my attempted --amend into a single commit on
> > > branch-2.8.
> > > Not sure about trunk and branch-2.
> > >
> > > Thanks for the clarification on the formatting.
> > > I will comply in the future.
> > >
> > > For such issues, is a dev@ email first better than trying to "fix" it?
> > >
> > > Again, sorry for the inconvenience.
> > >
> > > On Fri, Apr 22, 2016 at 12:10 AM, Andrew Wang <
> andrew.w...@cloudera.com>
> > > wrote:
> > >
> > > > What does "fix" mean? We aren't supposed to force push to non-feature
> > > > branches, and actually thought this was disabled.
> > > >
> > > > Also FYI for the future, we normally format our commit messages with
> > > > periods, e.g.:
> > > >
> > > > HADOOP-13011. Clearly Document the Password Details for
> Keystore-based
> > > > Credential Providers.
> > > >
> > > >
> > > > On Thu, Apr 21, 2016 at 8:26 PM, larry mccay 
> > wrote:
> > > >
> > > > > All -
> > > > >
> > > > > My first hadoop commit for HADOOP-13011 inadvertently referenced
> the
> > > > wrong
> > > > > JIRA (HADOOP-13001) in the commit message.
> > > > >
> > > > > Owen O'Malley helped me out by fixing the history on all 3
> branches:
> > > > trunk,
> > > > > branch-2, branch-2.8. The message is correct now in the current
> > history
> > > > but
> > > > > you may need to rebase to the current history for things to align
> > > > properly.
> > > > >
> > > > > I apologize for the inconvenience.
> > > > >
> > > > > thanks,
> > > > >
> > > > > --larry
> > > > >
> > > >
> > >
> >
>


Build failed in Jenkins: Hadoop-Common-trunk #2667

2016-04-22 Thread Apache Jenkins Server
See 

Changes:

[wangda] YARN-4846. Fix random failures for

--
[...truncated 5147 lines...]
[Surefire output elided: unit tests under org.apache.hadoop.util 
(TestIdentityHashStore, TestClasspath, TestApplicationClassLoader, TestShell, 
TestShutdownHookManager, TestConfTest, TestHttpExceptionUtils, TestJarFinder, 
hash.TestHash, TestLightWeightCache, TestNativeCodeLoader, TestReflectionUtils) 
and org.apache.hadoop.crypto (the TestCryptoStreams* suites, TestOpensslCipher, 
TestCryptoCodec, random.TestOsSecureRandom, random.TestOpensslSecureRandom, and 
the key provider / KMS client tests) passed with no failures or errors; the 
message is truncated here.]

Build failed in Jenkins: Hadoop-common-trunk-Java8 #1381

2016-04-22 Thread Apache Jenkins Server
See 

Changes:

[wangda] YARN-4846. Fix random failures for

--
[...truncated 5574 lines...]
[Surefire output elided: unit tests under org.apache.hadoop.io (TestBloomMapFile, 
TestBytesWritable, TestWritableUtils, TestBooleanWritable, TestDataByteBuffers, 
TestVersionedWritable, TestEnumSetWritable, TestMapWritable) and 
org.apache.hadoop.io.erasurecode (the rawcoder and coder suites, TestECSchema, 
TestCodecRawCoderMapping) passed with no failures or errors, each preceded by the 
repeated JVM warning "ignoring option MaxPermSize=768m; support was removed in 
8.0"; the message is truncated here.]

Re: Commit History Edit Alert

2016-04-22 Thread Owen O'Malley
In my opinion, prohibiting forced updates causes more pain than it helps.

The much more important part is that someone should make tags for the
recent releases in the "rel/*" namespace so that they can't be modified.
I'd suggest at least:

release-2.6.4
release-2.7.2
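
A hypothetical sketch of what that could look like (the commit ref and the tag
message are my assumptions; signed tags, since we sign releases):

   git tag -s rel/release-2.6.4 <release-2.6.4-commit> -m "Hadoop 2.6.4 release"
   git push origin rel/release-2.6.4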

.. Owen

On Fri, Apr 22, 2016 at 11:29 AM, Karthik Kambatla 
wrote:

> Recent changes to branch policies have affected this. Our trunk seems
> protected, but not branch-* branches. I had filed INFRA-11236 to address
> this. I can't ping on the JIRA anymore because comments from
> non-contributors are disabled.
>
> To prevent unintentional major messes, it is highly recommended we don't
> force push. For JIRA ID mistakes in commit messages, we have been filing
> another (possibly empty) commit that just says there was a mistake. e.g.
> something along the lines of "Mistakenly committed HADOOP-13011 as
> HADOOP-13001."
>
> On Thu, Apr 21, 2016 at 9:42 PM, larry mccay  wrote:
>
> > I believe that he squashed my attempted --amend into a single commit on
> > branch-2.8.
> > Not sure about trunk and branch-2.
> >
> > Thanks for the clarification on the formatting.
> > I will comply in the future.
> >
> > For such issues, is a dev@ email first better than trying to "fix" it?
> >
> > Again, sorry for the inconvenience.
> >
> > On Fri, Apr 22, 2016 at 12:10 AM, Andrew Wang 
> > wrote:
> >
> > > What does "fix" mean? We aren't supposed to force push to non-feature
> > > branches, and actually thought this was disabled.
> > >
> > > Also FYI for the future, we normally format our commit messages with
> > > periods, e.g.:
> > >
> > > HADOOP-13011. Clearly Document the Password Details for Keystore-based
> > > Credential Providers.
> > >
> > >
> > > On Thu, Apr 21, 2016 at 8:26 PM, larry mccay 
> wrote:
> > >
> > > > All -
> > > >
> > > > My first hadoop commit for HADOOP-13011 inadvertently referenced the
> > > wrong
> > > > JIRA (HADOOP-13001) in the commit message.
> > > >
> > > > Owen O'Malley helped me out by fixing the history on all 3 branches:
> > > trunk,
> > > > branch-2, branch-2.8. The message is correct now in the current
> history
> > > but
> > > > you may need to rebase to the current history for things to align
> > > properly.
> > > >
> > > > I apologize for the inconvenience.
> > > >
> > > > thanks,
> > > >
> > > > --larry
> > > >
> > >
> >
>


Re: Commit History Edit Alert

2016-04-22 Thread Karthik Kambatla
Recent changes to branch policies have affected this. Our trunk seems
protected, but not branch-* branches. I had filed INFRA-11236 to address
this. I can't ping on the JIRA anymore because comments from
non-contributors are disabled.

To prevent unintentional major messes, it is highly recommended we don't
force push. For JIRA ID mistakes in commit messages, we have been filing
another (possibly empty) commit that just says there was a mistake. e.g.
something along the lines of "Mistakenly committed HADOOP-13011 as
HADOOP-13001."
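
In git terms, a sketch of that practice (using the example message above):

   git commit --allow-empty -m "Mistakenly committed HADOOP-13011 as HADOOP-13001."
   git push origin trunk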

On Thu, Apr 21, 2016 at 9:42 PM, larry mccay  wrote:

> I believe that he squashed my attempted --amend into a single commit on
> branch-2.8.
> Not sure about trunk and branch-2.
>
> Thanks for the clarification on the formatting.
> I will comply in the future.
>
> For such issues, is a dev@ email first better than trying to "fix" it?
>
> Again, sorry for the inconvenience.
>
> On Fri, Apr 22, 2016 at 12:10 AM, Andrew Wang 
> wrote:
>
> > What does "fix" mean? We aren't supposed to force push to non-feature
> > branches, and actually thought this was disabled.
> >
> > Also FYI for the future, we normally format our commit messages with
> > periods, e.g.:
> >
> > HADOOP-13011. Clearly Document the Password Details for Keystore-based
> > Credential Providers.
> >
> >
> > On Thu, Apr 21, 2016 at 8:26 PM, larry mccay  wrote:
> >
> > > All -
> > >
> > > My first hadoop commit for HADOOP-13011 inadvertently referenced the
> > wrong
> > > JIRA (HADOOP-13001) in the commit message.
> > >
> > > Owen O'Malley helped me out by fixing the history on all 3 branches:
> > trunk,
> > > branch-2, branch-2.8. The message is correct now in the current history
> > but
> > > you may need to rebase to the current history for things to align
> > properly.
> > >
> > > I apologize for the inconvenience.
> > >
> > > thanks,
> > >
> > > --larry
> > >
> >
>


Build failed in Jenkins: Hadoop-common-trunk-Java8 #1380

2016-04-22 Thread Apache Jenkins Server
See 

Changes:

[raviprak] Revert "HADOOP-12563. Updated utility (dtutil) to create/modify token

--
[...truncated 5564 lines...]
[Surefire output elided: filesystem contract tests under 
org.apache.hadoop.fs.contract.ftp and org.apache.hadoop.fs.contract.localfs 
(Append, GetFileStatus, Open, Loaded, Delete, Mkdir, Rename, Create, SetTimes, 
Seek) and org.apache.hadoop.fs tests (TestQuotaUsage, TestContentSummary, 
TestTrash, TestFsShellTouch, TestGetSpaceUsed) passed with no failures or errors 
(the FTP and some localfs cases skipped), each preceded by the repeated JVM 
warning "ignoring option MaxPermSize=768m; support was removed in 8.0"; the 
message is truncated here.]

Jenkins build is back to normal : Hadoop-Common-trunk #2666

2016-04-22 Thread Apache Jenkins Server
See 



[jira] [Created] (HADOOP-13055) Implement linkMergeSlash for ViewFs

2016-04-22 Thread Zhe Zhang (JIRA)
Zhe Zhang created HADOOP-13055:
--

 Summary: Implement linkMergeSlash for ViewFs
 Key: HADOOP-13055
 URL: https://issues.apache.org/jira/browse/HADOOP-13055
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs, viewfs
Reporter: Zhe Zhang
Assignee: Zhe Zhang


In a multi-cluster environment it is sometimes useful to operate on the root / 
slash directory of an HDFS cluster. E.g., list all top level directories. 
Quoting the comment in {{ViewFs}}:
{code}
 *   A special case of the merge mount is where mount table's root is merged
 *   with the root (slash) of another file system:
 *   
 *   fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/
 *   
 *   In this case the root of the mount table is merged with the root of
 *   hdfs://nn99/
{code}
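
For illustration, a minimal sketch of the corresponding core-site.xml entry 
(the mount table name "default" and the namenode URI come from the comment 
above; treat the exact syntax as an assumption until this is implemented):

{code}
<property>
  <name>fs.viewfs.mounttable.default.linkMergeSlash</name>
  <value>hdfs://nn99/</value>
</property>
{code}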



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-common-trunk-Java8 #1379

2016-04-22 Thread Apache Jenkins Server
See 

Changes:

[kihwal] HDFS-9555. LazyPersistFileScrubber should still sleep if there are

--
[...truncated 5554 lines...]
[Surefire output elided: the same org.apache.hadoop.util test suites as in build 
#1382 (TestVersionUtil through hash.TestHash) passed with no failures or errors 
(TestWinUtils entirely skipped), each preceded by the repeated JVM warning 
"ignoring option MaxPermSize=768m; support was removed in 8.0"; the message is 
truncated here.]

[jira] [Reopened] (HADOOP-12563) Updated utility to create/modify token files

2016-04-22 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash reopened HADOOP-12563:
---

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, HADOOP-12563.13.patch, 
> dtutil-test-out, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serialization, which is hard or impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-13054) Use proto delimited IO to fix tests broken by HADOOP-12563

2016-04-22 Thread Matthew Paduano (JIRA)
Matthew Paduano created HADOOP-13054:


 Summary: Use proto delimited IO to fix tests broken by HADOOP-12563
 Key: HADOOP-13054
 URL: https://issues.apache.org/jira/browse/HADOOP-13054
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Matthew Paduano
Assignee: Matthew Paduano


HADOOP-12563 broke some unit tests 
(see the comments on that ticket).  Switching the protobuf read/write methods 
to "writeDelimitedTo" and "readDelimitedFrom" seems to fix things up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Common-trunk #2665

2016-04-22 Thread Apache Jenkins Server
See 

Changes:

[kihwal] HDFS-9555. LazyPersistFileScrubber should still sleep if there are

--
[...truncated 5147 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.181 sec - in 
org.apache.hadoop.test.TestGenericTestUtils
Running org.apache.hadoop.test.TestTimedOutTestsListener
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.231 sec - in 
org.apache.hadoop.test.TestTimedOutTestsListener
Running org.apache.hadoop.metrics.TestMetricsServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.102 sec - in 
org.apache.hadoop.metrics.TestMetricsServlet
Running org.apache.hadoop.metrics.spi.TestOutputRecord
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.055 sec - in 
org.apache.hadoop.metrics.spi.TestOutputRecord
Running org.apache.hadoop.metrics.ganglia.TestGangliaContext
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.193 sec - in 
org.apache.hadoop.metrics.ganglia.TestGangliaContext
Running org.apache.hadoop.net.TestNetUtils
Tests run: 41, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.802 sec - in 
org.apache.hadoop.net.TestNetUtils
Running org.apache.hadoop.net.TestDNS
Tests run: 12, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.392 sec <<< 
FAILURE! - in org.apache.hadoop.net.TestDNS
testNullDnsServer(org.apache.hadoop.net.TestDNS)  Time elapsed: 0.088 sec  <<< 
FAILURE!
java.lang.AssertionError: 
Expected: is "localhost"
 but: was "asf906.gq1.ygridcore.net"
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:865)
at org.junit.Assert.assertThat(Assert.java:832)
at org.apache.hadoop.net.TestDNS.testNullDnsServer(TestDNS.java:124)

testDefaultDnsServer(org.apache.hadoop.net.TestDNS)  Time elapsed: 0.011 sec  
<<< FAILURE!
java.lang.AssertionError: 
Expected: is "localhost"
 but: was "asf906.gq1.ygridcore.net"
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:865)
at org.junit.Assert.assertThat(Assert.java:832)
at org.apache.hadoop.net.TestDNS.testDefaultDnsServer(TestDNS.java:134)

Running org.apache.hadoop.net.TestSocketIOWithTimeout
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.425 sec - in 
org.apache.hadoop.net.TestSocketIOWithTimeout
Running org.apache.hadoop.net.TestNetworkTopologyWithNodeGroup
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.2 sec - in 
org.apache.hadoop.net.TestNetworkTopologyWithNodeGroup
Running org.apache.hadoop.net.TestClusterTopology
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.234 sec - in 
org.apache.hadoop.net.TestClusterTopology
Running org.apache.hadoop.net.TestScriptBasedMappingWithDependency
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.091 sec - in 
org.apache.hadoop.net.TestScriptBasedMappingWithDependency
Running org.apache.hadoop.net.TestTableMapping
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.765 sec - in 
org.apache.hadoop.net.TestTableMapping
Running org.apache.hadoop.net.TestScriptBasedMapping
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.683 sec - in 
org.apache.hadoop.net.TestScriptBasedMapping
Running org.apache.hadoop.net.unix.TestDomainSocketWatcher
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.632 sec - in 
org.apache.hadoop.net.unix.TestDomainSocketWatcher
Running org.apache.hadoop.net.unix.TestDomainSocket
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.911 sec - in 
org.apache.hadoop.net.unix.TestDomainSocket
Running org.apache.hadoop.net.TestSwitchMapping
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.454 sec - in 
org.apache.hadoop.net.TestSwitchMapping
Running org.apache.hadoop.net.TestStaticMapping
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.805 sec - in 
org.apache.hadoop.net.TestStaticMapping
Running org.apache.hadoop.cli.TestCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.671 sec - in 
org.apache.hadoop.cli.TestCLI
Running org.apache.hadoop.io.TestSortedMapWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.208 sec - in 
org.apache.hadoop.io.TestSortedMapWritable
Running org.apache.hadoop.io.TestIOUtils
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.416 sec - in 
org.apache.hadoop.io.TestIOUtils
Running org.apache.hadoop.io.TestSequenceFile
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.627 sec - in 
org.apache.hadoop.io.TestSequenceFile
Running org.apache.hadoop.io.TestEnumSetWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.497 sec - in 
org.apache.hadoop.io.TestEnumSetWritable
Running org.apache.hadoop.io.TestWritableName
Tests run: 4, Failures: 0, 

[jira] [Created] (HADOOP-13053) FS Shell should use File system API, not FileContext

2016-04-22 Thread Eric Badger (JIRA)
Eric Badger created HADOOP-13053:


 Summary: FS Shell should use File system API, not FileContext
 Key: HADOOP-13053
 URL: https://issues.apache.org/jira/browse/HADOOP-13053
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Eric Badger


FS Shell is FileSystem-based, but it is using the FileContext API. 
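
For context, a hedged sketch of the two client APIs in question (variable 
names and the path are illustrative):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;

class ShellApiSketch {
  static void compare() throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path("/user/example/file");

    // Classic FileSystem API, which the shell is meant to use:
    FileSystem fs = FileSystem.get(conf);
    FileStatus viaFs = fs.getFileStatus(path);

    // Newer FileContext API, which it is currently calling:
    FileContext fc = FileContext.getFileContext(conf);
    FileStatus viaFc = fc.getFileStatus(path);
  }
}
{code}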



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-13052) ChecksumFileSystem mishandles crc file permissions

2016-04-22 Thread Daryn Sharp (JIRA)
Daryn Sharp created HADOOP-13052:


 Summary: ChecksumFileSystem mishandles crc file permissions
 Key: HADOOP-13052
 URL: https://issues.apache.org/jira/browse/HADOOP-13052
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp


ChecksumFileSystem does not override permission-related calls to apply those 
operations to the hidden crc files.  Clients may be unable to read the crcs if 
the file is created with strict permissions that are later relaxed.

The checksum fs is designed to work with or without crcs present, so it 
silently ignores FileNotFoundException (FNF).  The Java file stream APIs 
unfortunately may only throw FNF, so permission denied surfaces as FNF and 
this bug goes silently unnoticed.

(Problem discovered via the public localizer: files are downloaded as 
user-readonly and then relaxed to all-read, but the crc remains user-readonly.)
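
A hedged sketch of the likely shape of a fix (not the committed patch): inside 
ChecksumFileSystem, permission calls would be applied to the hidden crc file 
as well, tolerating its absence.

{code}
// Method fragment assumed to live in ChecksumFileSystem, where the
// protected field "fs" is the underlying raw filesystem.
@Override
public void setPermission(Path src, FsPermission permission)
    throws IOException {
  fs.setPermission(src, permission);                     // the real file
  try {
    fs.setPermission(getChecksumFile(src), permission);  // the hidden .crc
  } catch (FileNotFoundException e) {
    // crc files are optional, so a missing crc is not an error here
  }
}
{code}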



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-13051) Test for special characters in path being respected during globPaths

2016-04-22 Thread Harsh J (JIRA)
Harsh J created HADOOP-13051:


 Summary: Test for special characters in path being respected 
during globPaths
 Key: HADOOP-13051
 URL: https://issues.apache.org/jira/browse/HADOOP-13051
 Project: Hadoop Common
  Issue Type: Test
  Components: fs
Affects Versions: 3.0.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor


On {{branch-2}}, the behaviour today is incorrect: paths with special 
characters get dropped during globStatus calls, as shown below:

{code}
bin/hdfs dfs -mkdir /foo
bin/hdfs dfs -touchz /foo/foo1
bin/hdfs dfs -touchz $'/foo/foo1\r'
bin/hdfs dfs -ls /foo/
-rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
-rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
bin/hdfs dfs -ls '/foo/*'
-rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
{code}

Trunk, by contrast, has the right behaviour, subtly fixed via the pattern 
library change of HADOOP-12436:

{code}
bin/hdfs dfs -mkdir /foo
bin/hdfs dfs -touchz /foo/foo1
bin/hdfs dfs -touchz $'/foo/foo1\r'
bin/hdfs dfs -ls /foo/
-rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
-rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
bin/hdfs dfs -ls '/foo/*'
-rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
-rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
{code}

(I've written ^M explicitly to indicate the presence of the intentional hidden 
character.)

We should still add a simple test case to cover this situation and guard 
against future regressions.
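
A rough sketch of such a test (illustrative only; setup and naming in the 
committed test may differ):

{code}
@Test
public void testGlobStatusKeepsSpecialCharacterPaths() throws Exception {
  FileSystem fs = FileSystem.getLocal(new Configuration());
  Path dir = new Path(System.getProperty("test.build.data", "/tmp"),
      "globSpecialChars");
  fs.mkdirs(dir);
  fs.create(new Path(dir, "foo1")).close();
  fs.create(new Path(dir, "foo1\r")).close();  // name ending in a hidden CR
  FileStatus[] matches = fs.globStatus(new Path(dir, "*"));
  // Both files must survive the glob; branch-2 silently dropped the CR one.
  assertEquals(2, matches.length);
}
{code}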



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : Hadoop-common-trunk-Java8 #1378

2016-04-22 Thread Apache Jenkins Server
See 



Re: Commit History Edit Alert

2016-04-22 Thread larry mccay
Thanks for the clarification, Andrew.
Yes, I'll add a comment about the results of my "testing". :)


On Fri, Apr 22, 2016 at 2:06 AM, Andrew Wang 
wrote:

> Squashing means force pushing, so please don't do that per ASF policies.
> The normal recommendation is just not to fix it; commit message typos
> aren't that big a deal. What I do is leave a comment on the JIRA to make it
> easier for people to track down the commit.
>
> I found INFRA-11136 where we supposedly protected trunk and also
> INFRA-11236 about getting this in place for the branch-Xs. Larry, could you
> update INFRA-11236 with your empirical testing? Would be good to get these
> branches protected again for the future.
>
> Thanks,
> Andrew
>
>
> On Thu, Apr 21, 2016 at 9:42 PM, larry mccay  wrote:
>
> > I believe that he squashed my attempted --amend into a single commit on
> > branch-2.8.
> > Not sure about trunk and branch-2.
> >
> > Thanks for the clarification on the formatting.
> > I will comply in the future.
> >
> > For such issues, is a dev@ email first better than trying to "fix" it?
> >
> > Again, sorry for the inconvenience.
> >
> > On Fri, Apr 22, 2016 at 12:10 AM, Andrew Wang 
> > wrote:
> >
> > > What does "fix" mean? We aren't supposed to force push to non-feature
> > > branches, and actually thought this was disabled.
> > >
> > > Also FYI for the future, we normally format our commit messages with
> > > periods, e.g.:
> > >
> > > HADOOP-13011. Clearly Document the Password Details for Keystore-based
> > > Credential Providers.
> > >
> > >
> > > On Thu, Apr 21, 2016 at 8:26 PM, larry mccay 
> wrote:
> > >
> > > > All -
> > > >
> > > > My first hadoop commit for HADOOP-13011 inadvertently referenced the
> > > wrong
> > > > JIRA (HADOOP-13001) in the commit message.
> > > >
> > > > Owen O'Malley helped me out by fixing the history on all 3 branches:
> > > trunk,
> > > > branch-2, branch-2.8. The message is correct now in the current
> history
> > > but
> > > > you may need to rebase to the current history for things to align
> > > properly.
> > > >
> > > > I apologize for the inconvenience.
> > > >
> > > > thanks,
> > > >
> > > > --larry
> > > >
> > >
> >
>


[jira] [Created] (HADOOP-13050) Upgrade to AWS SDK 10.10+ for Java 8u60+

2016-04-22 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13050:
---

 Summary: Upgrade to AWS SDK 10.10+ for Java 8u60+
 Key: HADOOP-13050
 URL: https://issues.apache.org/jira/browse/HADOOP-13050
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build, fs/s3
Affects Versions: 2.7.2
Reporter: Steve Loughran


HADOOP-13044 highlights that AWS SDK 10.6, which ships in Hadoop 2.7+, doesn't 
work on OpenJDK >= 8u60, because a change in the JDK broke the version of 
Joda-Time that the AWS SDK uses.

Fix: update the SDK. That, though, implies updating the HTTP components 
dependency as well: HADOOP-12767.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Slack channel for Hadoop developers

2016-04-22 Thread Tsuyoshi Ozawa
Hi,

I created a Slack channel for the Hadoop community, unofficially and
experimentally:
https://hadoopdev.slack.com/

I know that there is an IRC channel and that it is good for logging. However,
Slack is also a very good tool for joining easily and communicating
interactively. It will also be useful for remotely joining the following kind
of meetup:
http://www.meetup.com/Hadoop-Contributors/events/230495682/?eventId=230495682

Please let me know if you find any trouble or problem.

To join the slack, please register your email address from here:
https://hadoopdev-invitation.herokuapp.com/

Thanks,
- Tsuyoshi


Re: Commit History Edit Alert

2016-04-22 Thread Andrew Wang
Squashing means force pushing, so please don't do that per ASF policies.
The normal recommendation is just not to fix it; commit message typos
aren't that big a deal. What I do is leave a comment on the JIRA to make it
easier for people to track down the commit.

I found INFRA-11136 where we supposedly protected trunk and also
INFRA-11236 about getting this in place for the branch-Xs. Larry, could you
update INFRA-11236 with your empirical testing? Would be good to get these
branches protected again for the future.

Thanks,
Andrew


On Thu, Apr 21, 2016 at 9:42 PM, larry mccay  wrote:

> I believe that he squashed my attempted --amend into a single commit on
> branch-2.8.
> Not sure about trunk and branch-2.
>
> Thanks for the clarification on the formatting.
> I will comply in the future.
>
> For such issues, is a dev@ email first better than trying to "fix" it?
>
> Again, sorry for the inconvenience.
>
> On Fri, Apr 22, 2016 at 12:10 AM, Andrew Wang 
> wrote:
>
> > What does "fix" mean? We aren't supposed to force push to non-feature
> > branches, and actually thought this was disabled.
> >
> > Also FYI for the future, we normally format our commit messages with
> > periods, e.g.:
> >
> > HADOOP-13011. Clearly Document the Password Details for Keystore-based
> > Credential Providers.
> >
> >
> > On Thu, Apr 21, 2016 at 8:26 PM, larry mccay  wrote:
> >
> > > All -
> > >
> > > My first hadoop commit for HADOOP-13011 inadvertently referenced the
> > wrong
> > > JIRA (HADOOP-13001) in the commit message.
> > >
> > > Owen O'Malley helped me out by fixing the history on all 3 branches:
> > trunk,
> > > branch-2, branch-2.8. The message is correct now in the current history
> > but
> > > you may need to rebase to the current history for things to align
> > properly.
> > >
> > > I apologize for the inconvenience.
> > >
> > > thanks,
> > >
> > > --larry
> > >
> >
>