Re: Jira Lock Down Upgraded?

2016-05-12 Thread Xiao Chen
FYI - I pinged infra, and the answer is that only people with
PMC/committer/contributor *Roles* on jira can comment. Hadoop's contributor
list seems to have reached its limit, so no one can be added.

Hopefully access will be restored after the stated May 12 2300 UTC.

Maybe we should increase the contributor limit or add some kind of
whitelist, so this spam countermeasure doesn't do too much collateral
damage to regular work.

-Xiao

On Thu, May 12, 2016 at 7:46 PM, Ted Yu  wrote:

> Looks like side effects of this lockdown are:
>
> 1. the person (non-admin) who filed the JIRA cannot comment on it
> 2. results of QA runs cannot be posted to the JIRA (at least for hbase tests)
>
> :-(
>
> On Thu, May 12, 2016 at 3:10 PM, Andrew Wang 
> wrote:
>
>> Try asking on infra.chat (Apache INFRA's hipchat). I was in that room
>> earlier today, and they were working on the ongoing JIRA spam.
>>
>> On Thu, May 12, 2016 at 3:03 PM, Xiao Chen  wrote:
>>
>> > Hello,
>> >
>> > I'm not sure if common-dev is the right contact list; please redirect
>> > me if not.
>> >
>> > It seems the jira lockdown has somehow become more strict?
>> > I was able to comment on an HDFS jira
>> > <
>> >
>> https://issues.apache.org/jira/browse/HDFS-4210?focusedCommentId=15282111&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15282111
>> > >
>> > at around 14:45 PDT today, but now I cannot.
>> >
>> > The banner still says:
>> >
>> > Jira is in Temporary Lockdown mode as a spam countermeasure. Only
>> logged-in
>> > users with active roles (committer, contributor, PMC, etc.) will be
>> able to
>> > create issues or comments during this time. Lockdown period from 11 May
>> > 2300 UTC to estimated 12 May 2300 UTC.
>> >
>> >
>> > But with a quick check with Yongjun and Yufei, contributors are locked
>> down
>> > as well.
>> >
>> > Thanks,
>> > -Xiao
>> >
>>
>
>


Re: Jira Lock Down Upgraded?

2016-05-12 Thread Ted Yu
Looks like side effects of this lockdown are:

1. the person (non-admin) who filed the JIRA cannot comment on it
2. results of QA runs cannot be posted to the JIRA (at least for hbase tests)

:-(

On Thu, May 12, 2016 at 3:10 PM, Andrew Wang 
wrote:

> Try asking on infra.chat (Apache INFRA's hipchat). I was in that room
> earlier today, and they were working on the ongoing JIRA spam.
>
> On Thu, May 12, 2016 at 3:03 PM, Xiao Chen  wrote:
>
> > Hello,
> >
> > I'm not sure if common-dev is the right contact list; please redirect me
> > if not.
> >
> > It seems the jira lockdown has somehow become more strict?
> > I was able to comment on an HDFS jira
> > <
> >
> https://issues.apache.org/jira/browse/HDFS-4210?focusedCommentId=15282111&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15282111
> > >
> > at around 14:45 PDT today, but now I cannot.
> >
> > The banner still says:
> >
> > Jira is in Temporary Lockdown mode as a spam countermeasure. Only
> logged-in
> > users with active roles (committer, contributor, PMC, etc.) will be able
> to
> > create issues or comments during this time. Lockdown period from 11 May
> > 2300 UTC to estimated 12 May 2300 UTC.
> >
> >
> > But with a quick check with Yongjun and Yufei, contributors are locked
> down
> > as well.
> >
> > Thanks,
> > -Xiao
> >
>


Build failed in Jenkins: Hadoop-common-trunk-Java8 #1468

2016-05-12 Thread Apache Jenkins Server
See 

Changes:

[wang] HADOOP-13142. Change project version from 3.0.0 to 3.0.0-alpha1.

--
[...truncated 5581 lines...]
Running org.apache.hadoop.util.TestChunkedArrayList
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.856 sec - in 
org.apache.hadoop.util.TestChunkedArrayList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestVersionUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.183 sec - in 
org.apache.hadoop.util.TestVersionUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.259 sec - in 
org.apache.hadoop.util.TestProtoUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestLightWeightGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.186 sec - in 
org.apache.hadoop.util.TestLightWeightGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGSet
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.8 sec - in 
org.apache.hadoop.util.TestGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringInterner
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.101 sec - in 
org.apache.hadoop.util.TestStringInterner
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestZKUtil
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.131 sec - in 
org.apache.hadoop.util.TestZKUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringUtils
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.232 sec - in 
org.apache.hadoop.util.TestStringUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFindClass
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.597 sec - in 
org.apache.hadoop.util.TestFindClass
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGenericOptionsParser
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.885 sec - in 
org.apache.hadoop.util.TestGenericOptionsParser
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestRunJar
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.45 sec - in 
org.apache.hadoop.util.TestRunJar
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestSysInfoLinux
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.255 sec - in 
org.apache.hadoop.util.TestSysInfoLinux
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.829 sec - in 
org.apache.hadoop.util.TestLightWeightCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestDirectBufferPool
Running org.apache.hadoop.util.TestFileBasedIPList
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.124 sec - in 
org.apache.hadoop.util.TestDirectBufferPool
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.189 sec - in 
org.apache.hadoop.util.TestFileBasedIPList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIndexedSort
Running org.apache.hadoop.util.TestIdentityHashStore
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.202 sec - in 
org.apache.hadoop.util.TestIdentityHashStore
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestMachineList
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.698 sec - in 
org.apache.hadoop.util.TestIndexedSort
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestWinUtils
Tests run: 11, Failures: 0, Errors: 0, Skipped: 11, Time elapsed: 0.193 sec - 

[jira] [Resolved] (HADOOP-13142) Change project version from 3.0.0 to 3.0.0-alpha1

2016-05-12 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HADOOP-13142.
--
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha1

Thanks for the review Colin, committed to trunk.

> Change project version from 3.0.0 to 3.0.0-alpha1
> -
>
> Key: HADOOP-13142
> URL: https://issues.apache.org/jira/browse/HADOOP-13142
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
> Fix For: 3.0.0-alpha1
>
> Attachments: hadoop-13142.001.patch
>
>
> We want to rename 3.0.0 to 3.0.0-alpha1 for the first alpha release. However, 
> the version number is also encoded outside of the pom.xml's, so we need to 
> update these too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Merge feature branch HADOOP-12930

2016-05-12 Thread Andrew Wang
+1. I looked at the patches on the branch, wasn't too bad to review. As
Allen said, there's some code movement, assorted other nice doc and shell
fixups.

Found one extra typo, which I added to HADOOP-13129.

Best,
Andrew

On Wed, May 11, 2016 at 1:14 AM, Sean Busbey  wrote:

> +1 (non-binding)
>
> reviewed everything, filed an additional subtask for a very trivial
> typo in the docs. should be fine to make a full issue after close and
> then fix.
>
> tried merging locally, tried running through new shell tests (both
> with and without bats installed), tried making an example custom
> command (valid and malformed). everything looks great.
>
> On Mon, May 9, 2016 at 1:26 PM, Allen Wittenauer  wrote:
> >
> > Hey gang!
> >
> > I’d like to call a vote to run for 7 days (ending May 16 at
> 13:30 PT) to merge the HADOOP-12930 feature branch into trunk. This branch
> was developed exclusively by me as per the discussion two months ago as a
> way to make what would be a rather large patch hopefully easier to review.
> The vast majority of the branch is code movement in the same file,
> additional license headers, maven assembly hooks for distribution, and
> variable renames. Not a whole lot of new code, but a big diff file
> nonetheless.
> >
> > This branch modifies the ‘hadoop’, ‘hdfs’, ‘mapred’, and ‘yarn’
> commands to allow for subcommands to be added or modified at runtime.  This
> allows for individual users or entire sites to tweak the execution
> environment to suit their local needs.  For example, it has been a practice
> for some locations to change the distcp jar out for a custom one.  Using
> this functionality, it is possible that the ‘hadoop distcp’ command could
> run the local version without overwriting the bundled jar and for existing
> documentation (read: results from Internet searches) to work as written
> without modification. This has the potential to be a huge win, especially
> for:
> >
> > * advanced end users looking to supplement the Apache
> Hadoop experience
> > * operations teams that may be able to leverage existing
> documentation without having to maintain local “exception” docs
> > * development groups wanting an easy way to trial
> experimental features
> >
> > Additionally, this branch includes the following, related
> changes:
> >
> > * Adds the first unit tests for the ‘hadoop’ command
> > * Adds the infrastructure for hdfs script testing and
> the first unit test for the ‘hdfs’ command
> > * Modifies the hadoop-tools components to be dynamic
> rather than hard coded
> > * Renames the shell profiles for hdfs, mapred, and yarn
> to be consistent with other bundled profiles, including the ones introduced
> in this branch
> >
> > Documentation, including a ‘hello world’-style example, is in
> the UnixShellGuide markdown file.  (Of course!)
> >
> >  I am at ApacheCon this week if anyone wants to discuss in-depth.
> >
> > Thanks!
> >
> > P.S.,
> >
> > There are still two open sub-tasks.  These are blocked by other
> issues so that we may add unit testing to the shell code in those
> respective areas.  I’ll convert them to full issues after HADOOP-12930 is closed.
> >
> >
> > -
> > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> >
>
>
>
> --
> busbey
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


Re: Compatibility guidelines for toString overrides

2016-05-12 Thread Raymie Stata
How about this idea:

Define a new annotation, "StableImplUnstableInterface", meaning consumers 
can't assume stability but producers also can't change things. Mark all 
toStrings with this annotation. 

Then, in a lazy fashion, as the need arises to change various toString methods, 
due diligence can be done first to see whether any legacy code depends on a method 
in a compatibility-breaking manner; those dependencies can be fixed, and the method 
can then be changed and re-marked as unstable.  Conversely, there might be 
circumstances where a toString method should be marked as stable.  (Certainly it's 
reasonable to assume that Integer.toString returns a parsable result, for example; 
the point being that for some classes it makes sense to have a stable spec for 
toString.)

Over the years one would hope that the StableImplUnstableInterface annotations 
would disappear.
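
A minimal sketch of what such an annotation could look like (the name and 
retention policy are assumptions taken from this proposal; no such annotation 
exists in Hadoop today):

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Sketch of the proposed annotation: callers must not depend on the
 * output format, yet implementors must not change it until a
 * compatibility review has been done.
 */
@Documented
@Retention(RetentionPolicy.CLASS)
@Target(ElementType.METHOD)
public @interface StableImplUnstableInterface {
}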

Sent from my iPhone

> On May 12, 2016, at 1:40 PM, Sean Busbey  wrote:
> 
> As a downstream user of Hadoop, it would be much clearer if the
> toString functions included the appropriate annotations to say they're
> non-public, evolving, or whatever.
> 
> Most downstream users of Hadoop aren't going to remember detailed
> exceptions to the java API compatibility rules; once they see that a
> class is labeled Public/Stable, they're going to presume that applies
> to all non-private members.
> 
>> On Thu, May 12, 2016 at 9:32 AM, Colin McCabe  wrote:
>> Hi all,
>> 
>> Recently a discussion came up on HADOOP-13028 about the wisdom of
>> overloading S3AInputStream#toString to output statistics information.
>> It's a difficult judgement for me to make, since I'm not aware of any
>> compatibility guidelines for InputStream#toString.  Do we have
>> compatibility guidelines for toString functions?
>> 
>> It seems like the output of toString functions is usually used as a
>> debugging aid, rather than as a stable format suitable for UI display or
>> object serialization.  Clearly, there are a few cases where we might
>> want to specifically declare that a toString method is a stable API.
>> However, I think if we attempt to treat the toString output of all
>> public classes as stable, we will have greatly increased the API
>> surface.  Should we formalize this and declare that toString functions
>> are @Unstable, Evolving unless declared otherwise?
>> 
>> best,
>> Colin
>> 
>> -
>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> 
> 
> 
> -- 
> busbey
> 
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> 

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Looking to a Hadoop 3 release

2016-05-12 Thread Karthik Kambatla
I am with Vinod on avoiding merging mostly_complete_branches to trunk since
we are not shipping any release off it. If 3.x releases going off of trunk
is going to help with this, I am fine with that approach. We should still
make sure to keep trunk-incompat small and not include large features.

On Sat, Apr 23, 2016 at 6:53 PM, Chris Douglas  wrote:

> If we're not starting branch-3/trunk, what would distinguish it from
> trunk/trunk-incompat? Is it the same mechanism with different labels?
>
> That may be a reasonable strategy when we create branch-3, as a
> release branch for beta. Releasing 3.x from trunk will help us figure
> out which incompatibilities can be called out in an upgrade guide
> (e.g., "new feature X is incompatible with uncommon configuration Y")
> and which require code changes (e.g., "data loss upgrading a cluster
> with feature X"). Given how long trunk has been unreleased, we need
> more data from deployments to triage. How to manage transitions
> between major versions will always be case-by-case; consensus on how
> we'll address generic incompatible changes is not saving any work.
>
> Once created, removing functionality from branch-3 (leaving it in
> trunk) _because_ nobody volunteers cycles to address urgent
> compatibility issues is fair. It's also more workable than asking that
> features be committed to a branch that we have no plan to release,
> even as alpha. -C
>
> On Fri, Apr 22, 2016 at 6:50 PM, Vinod Kumar Vavilapalli
>  wrote:
> > Tx for your replies, Andrew.
> >
> >>> For exit criteria, how about we time box it? My plan was to do monthly
> >> alphas through the summer, leading up to beta in late August / early
> Sep.
> >> At that point we freeze and stabilize for GA in Nov/Dec.
> >
> >
> > Time-boxing is a reasonable exit-criterion.
> >
> >
> >> In this case, does trunk-incompat essentially become the new trunk? Or
> are
> >> we treating trunk-incompat as a feature branch, which periodically
> merges
> >> changes from trunk?
> >
> >
> > It’s the latter. Essentially
> >  - trunk-incompat = trunk + only incompatible changes, periodically kept
> up-to-date to trunk
> >  - trunk is always ready to ship
> >  - and no compatible code gets left behind
> >
> > The reason for proposing this is to address the tension between
> “there is a lot of compatible code in trunk that we are not shipping” and
> “don’t ship trunk, it has incompatibilities”. With this, compatible code
> will always get shipped to users.
> >
> > Obviously, we can forget about my proposal completely if everyone
> puts all compatible code into branch-2 / branch-3 or whatever the main
> releasable branch is. This didn’t work in practice; we saw it not
> happen, prominently during 0.21, and now 3.x.
> >
> > There is another related issue - “my feature is nearly ready, so I’ll
> just merge it into trunk since we don’t release that anyway, but not into the
> current releasable branch - I’m too lazy to fix the last few stability-related
> issues”. With this, we will (should) get more disciplined, take feature
> stability on a branch seriously, and merge a feature branch only when it is
> truly ready!
> >
> >> For 3.x, my strawman was to release off trunk for the alphas, then
> branch a
> >> branch-3 for the beta and onwards.
> >
> >
> > Repeating above, I’m proposing continuing to make GA 3.x releases also
> off of trunk! This way only incompatible changes don’t get shipped to users
> - by design! Eventually, trunk-incompat will be latest 3.x GA + enough
> incompatible code to warrant a 4.x, 5.x etc.
> >
> > +Vinod
>


Jenkins build is back to normal : Hadoop-Common-trunk #2756

2016-05-12 Thread Apache Jenkins Server
See 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Jenkins build is back to normal : Hadoop-common-trunk-Java8 #1467

2016-05-12 Thread Apache Jenkins Server
See 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13142) Change project version from 3.0.0 to 3.0.0-alpha1

2016-05-12 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-13142:


 Summary: Change project version from 3.0.0 to 3.0.0-alpha1
 Key: HADOOP-13142
 URL: https://issues.apache.org/jira/browse/HADOOP-13142
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0-alpha1
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Blocker


We want to rename 3.0.0 to 3.0.0-alpha1 for the first alpha release. However, 
the version number is also encoded outside of the pom.xml's, so we need to 
update these too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-9292) NullPointerException in httpfs tests

2016-05-12 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-9292.
--
  Resolution: Cannot Reproduce
Target Version/s:   (was: )

Stale.

> NullPointerException in httpfs tests
> 
>
> Key: HADOOP-9292
> URL: https://issues.apache.org/jira/browse/HADOOP-9292
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Vadim Bondarev
>
> testDelegationTokenWithWebhdfsFileSystem(org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos)
> testOperation[7](org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem)
> testOperationDoAs[7](org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem)
> problem in method JsonUtil.toFileStatus



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-12093) pre-patch findbugs fails on a branch-based pre-commit runs

2016-05-12 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-12093.
---
Resolution: Cannot Reproduce

> pre-patch findbugs fails on a branch-based pre-commit runs
> --
>
> Key: HADOOP-12093
> URL: https://issues.apache.org/jira/browse/HADOOP-12093
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Sangjin Lee
>
> On our branch development JIRAs (YARN-2928), we are starting to see pre-patch 
> findbugs checks fail consistently. The relevant message:
> {noformat}
> findbugs baseline for 
> /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build
>   Running findbugs in 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
> /home/jenkins/tools/maven/latest/bin/mvn clean test findbugs:findbugs 
> -DskipTests -DhadoopPatchProcess > 
> /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/patchprocess/YARN-2928FindBugsOutputhadoop-yarn-server-timelineservice.txt
>  2>&1
> Exception in thread "main" java.io.FileNotFoundException: 
> /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/patchprocess/YARN-2928FindbugsWarningshadoop-yarn-server-timelineservice.xml
>  (No such file or directory)
>   at java.io.FileInputStream.open(Native Method)
>   at java.io.FileInputStream.(FileInputStream.java:146)
>   at 
> edu.umd.cs.findbugs.SortedBugCollection.progessMonitoredInputStream(SortedBugCollection.java:1231)
>   at 
> edu.umd.cs.findbugs.SortedBugCollection.readXML(SortedBugCollection.java:308)
>   at 
> edu.umd.cs.findbugs.SortedBugCollection.readXML(SortedBugCollection.java:295)
>   at edu.umd.cs.findbugs.workflow.Filter.main(Filter.java:712)
> Pre-patch YARN-2928 findbugs is broken?
> {noformat}
> See YARN-3706 and YARN-3792 for instance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-9837) Hadoop Token Command

2016-05-12 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-9837.
--
Resolution: Implemented

This looks like a dupe of HADOOP-12563.

Closing as such.

> Hadoop Token Command
> 
>
> Key: HADOOP-9837
> URL: https://issues.apache.org/jira/browse/HADOOP-9837
> Project: Hadoop Common
>  Issue Type: Task
>  Components: security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>  Labels: Rhino
>
> This JIRA is to define commands for Hadoop tokens. The scope of this task is 
> highlighted as follows:
> • Token init: authenticate and request an identity token, then persist 
> the token in the token cache for later reuse.
> • Token display: show the existing token with its info and attributes in 
> the token cache.
> • Token revoke: revoke a token so that the token will no longer be valid 
> and cannot be used later.
> • Token renew: extend the lifecycle of a token before it expires.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13141) Mini HDFS Cluster fails to start on trunk

2016-05-12 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HADOOP-13141.
--
Resolution: Fixed

> Mini HDFS Cluster fails to start on trunk
> -
>
> Key: HADOOP-13141
> URL: https://issues.apache.org/jira/browse/HADOOP-13141
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>
> It's been noticed that Mini HDFS Cluster fails to start on trunk, blocking 
> unit tests and Jenkins.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13141) Mini HDFS Cluster fails to start on trunk

2016-05-12 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HADOOP-13141:
--

 Summary: Mini HDFS Cluster fails to start on trunk
 Key: HADOOP-13141
 URL: https://issues.apache.org/jira/browse/HADOOP-13141
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiaobing Zhou


It's been noticed that Mini HDFS Cluster fails to start on trunk, blocking unit 
tests and Jenkins.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Build failed in Jenkins: Hadoop-common-trunk-Java8 #1466

2016-05-12 Thread Apache Jenkins Server
See 

Changes:

[jlowe] YARN-5053. More informative diagnostics when applications killed by a

[aw] HADOOP-12581. ShellBasedIdMapping needs suport for Solaris (Alan

--
[...truncated 3876 lines...]
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-minikdc ---
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-minikdc ---
[INFO] Surefire report directory: 


---
 T E S T S
---

---
 T E S T S
---
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.631 sec - in 
org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.minikdc.TestMiniKdc
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.937 sec - in 
org.apache.hadoop.minikdc.TestMiniKdc

Results :

Tests run: 6, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-minikdc 
---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-minikdc ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-minikdc ---
[INFO] 
Loading source files for package org.apache.hadoop.minikdc...
Constructing Javadoc information...
Standard Doclet version 1.8.0
Building tree for all the packages and classes...
Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Building index for all the packages and classes...
Generating 

Generating 

Generating 

Building index for all classes...
Generating 

Re: Jira Lock Down Upgraded?

2016-05-12 Thread Andrew Wang
Try asking on infra.chat (Apache INFRA's hipchat). I was in that room
earlier today, and they were working on the ongoing JIRA spam.

On Thu, May 12, 2016 at 3:03 PM, Xiao Chen  wrote:

> Hello,
>
> I'm not sure if common-dev is the right contact list; please redirect me if
> not.
>
> It seems the jira lockdown has somehow become more strict?
> I was able to comment on an HDFS jira
> <
> https://issues.apache.org/jira/browse/HDFS-4210?focusedCommentId=15282111&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15282111
> >
> at around 14:45 PDT today, but now I cannot.
>
> The banner still says:
>
> Jira is in Temporary Lockdown mode as a spam countermeasure. Only logged-in
> users with active roles (committer, contributor, PMC, etc.) will be able to
> create issues or comments during this time. Lockdown period from 11 May
> 2300 UTC to estimated 12 May 2300 UTC.
>
>
> But with a quick check with Yongjun and Yufei, contributors are locked down
> as well.
>
> Thanks,
> -Xiao
>


[jira] [Resolved] (HADOOP-12435) Findbugs is broken on trunk

2016-05-12 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-12435.
---
  Resolution: Won't Fix
Target Version/s:   (was: )

> Findbugs is broken on trunk
> ---
>
> Key: HADOOP-12435
> URL: https://issues.apache.org/jira/browse/HADOOP-12435
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Mit Desai
>
> Here is the stacktrace of the failure.
> {noformat}
> 
> 
>  Pre-patch trunk Java verification
> 
> 
> Compiling /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build
> /home/jenkins/tools/maven/latest/bin/mvn clean test -DskipTests 
> -DhadoopPatchProcess -Ptest-patch > 
> /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/patchprocess/trunkJavacWarnings.txt
>  2>&1
> Javadoc'ing /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build
> /home/jenkins/tools/maven/latest/bin/mvn clean test javadoc:javadoc 
> -DskipTests -Pdocs -DhadoopPatchProcess > 
> /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/patchprocess/trunkJavadocWarnings.txt
>  2>&1
> Patch does not appear to need site tests.
> findbugs baseline for 
> /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build
>   Running findbugs in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client
> /home/jenkins/tools/maven/latest/bin/mvn clean test findbugs:findbugs 
> -DskipTests -DhadoopPatchProcess > 
> /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/patchprocess/trunkFindBugsOutputhadoop-yarn-client.txt
>  2>&1
> Exception in thread "main" java.io.FileNotFoundException: 
> /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/patchprocess/trunkFindbugsWarningshadoop-yarn-client.xml
>  (No such file or directory)
>   at java.io.FileInputStream.open(Native Method)
>   at java.io.FileInputStream.(FileInputStream.java:146)
>   at 
> edu.umd.cs.findbugs.SortedBugCollection.progessMonitoredInputStream(SortedBugCollection.java:1231)
>   at 
> edu.umd.cs.findbugs.SortedBugCollection.readXML(SortedBugCollection.java:308)
>   at 
> edu.umd.cs.findbugs.SortedBugCollection.readXML(SortedBugCollection.java:295)
>   at edu.umd.cs.findbugs.workflow.Filter.main(Filter.java:712)
>   Running findbugs in 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy
> /home/jenkins/tools/maven/latest/bin/mvn clean test findbugs:findbugs 
> -DskipTests -DhadoopPatchProcess > 
> /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/patchprocess/trunkFindBugsOutputhadoop-yarn-server-web-proxy.txt
>  2>&1
> Pre-patch trunk findbugs is broken?
> Elapsed time:  16m  1s
> {noformat}
> Link to the console output of the run. Seen in multiple places.
> 1. https://builds.apache.org/job/PreCommit-YARN-Build/9243/console
> 2. https://builds.apache.org/job/PreCommit-YARN-Build/9241/console



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-8497) Shell needs a way to list amount of physical consumed space in a directory

2016-05-12 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-8497.
--
  Resolution: Duplicate
Target Version/s:   (was: )

> Shell needs a way to list amount of physical consumed space in a directory
> --
>
> Key: HADOOP-8497
> URL: https://issues.apache.org/jira/browse/HADOOP-8497
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 1.0.3, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Andy Isaacson
>
> Currently, there is no way to see the physical consumed space for a 
> directory. du lists the logical (pre-replication) space, and "fs -count" only 
> displays the consumed space when a quota is set. This makes it hard for 
> administrators to set a quota on a directory, since they have no way to 
> determine a reasonable value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Jira Lock Down Upgraded?

2016-05-12 Thread Xiao Chen
Hello,

I'm not sure if common-dev is the right contact list; please redirect me if
not.

It seems the jira lockdown has somehow become more strict?
I was able to comment on an HDFS jira

at around 14:45 PDT today, but now I cannot.

The banner still says:

Jira is in Temporary Lockdown mode as a spam countermeasure. Only logged-in
users with active roles (committer, contributor, PMC, etc.) will be able to
create issues or comments during this time. Lockdown period from 11 May
2300 UTC to estimated 12 May 2300 UTC.


But with a quick check with Yongjun and Yufei, contributors are locked down
as well.

Thanks,
-Xiao


[jira] [Resolved] (HADOOP-8854) Document backward incompatible changes between hadoop-1.x and 2.x

2016-05-12 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-8854.
--
Resolution: Won't Fix

a) This is stale
b) The Incompatible flag in JIRA + Apache Yetus releasedocmaker automatically 
does this work

Closing as Won't Fix.

> Document backward incompatible changes between hadoop-1.x and 2.x
> -
>
> Key: HADOOP-8854
> URL: https://issues.apache.org/jira/browse/HADOOP-8854
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha, 3.0.0-alpha1
>Reporter: Arpit Gupta
>Assignee: Suresh Srinivas
>
> We should create a new site document to explicitly list the known 
> incompatible changes between hadoop 1.x and 2.x.
> I believe this will make it easier for users to determine all the changes one 
> needs to make when moving from 1.x to 2.x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Compatibility guidelines for toString overrides

2016-05-12 Thread Sean Busbey
As a downstream user of Hadoop, it would be much clearer if the
toString functions included the appropriate annotations to say they're
non-public, evolving, or whatever.

Most downstream users of Hadoop aren't going to remember detailed
exceptions to the java API compatibility rules; once they see that a
class is labeled Public/Stable, they're going to presume that applies
to all non-private members.

On Thu, May 12, 2016 at 9:32 AM, Colin McCabe  wrote:
> Hi all,
>
> Recently a discussion came up on HADOOP-13028 about the wisdom of
> overloading S3AInputStream#toString to output statistics information.
> It's a difficult judgement for me to make, since I'm not aware of any
> compatibility guidelines for InputStream#toString.  Do we have
> compatibility guidelines for toString functions?
>
> It seems like the output of toString functions is usually used as a
> debugging aid, rather than as a stable format suitable for UI display or
> object serialization.  Clearly, there are a few cases where we might
> want to specifically declare that a toString method is a stable API.
> However, I think if we attempt to treat the toString output of all
> public classes as stable, we will have greatly increased the API
> surface.  Should we formalize this and declare that toString functions
> are @Unstable, Evolving unless declared otherwise?
>
> best,
> Colin
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>



-- 
busbey

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Build failed in Jenkins: Hadoop-Common-trunk #2754

2016-05-12 Thread Apache Jenkins Server
See 

Changes:

[stevel] HADOOP-13028 add low level counter metrics for S3A; use in read

--
[...truncated 5162 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.934 sec - in 
org.apache.hadoop.security.TestKDiagNoKDC
Running org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.056 sec - in 
org.apache.hadoop.security.TestGroupsCaching
Running org.apache.hadoop.security.TestUGILoginFromKeytab
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.731 sec - in 
org.apache.hadoop.security.ssl.TestSSLFactory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.694 sec - in 
org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Running org.apache.hadoop.security.TestUserGroupInformation
Running org.apache.hadoop.security.TestUGIWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.091 sec - in 
org.apache.hadoop.security.TestUGIWithExternalKdc
Running org.apache.hadoop.security.http.TestRestCsrfPreventionFilter
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.521 sec - in 
org.apache.hadoop.security.http.TestRestCsrfPreventionFilter
Running org.apache.hadoop.security.http.TestXFrameOptionsFilter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.367 sec - in 
org.apache.hadoop.security.http.TestXFrameOptionsFilter
Running org.apache.hadoop.security.http.TestCrossOriginFilter
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.452 sec - in 
org.apache.hadoop.security.http.TestCrossOriginFilter
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.668 sec - in 
org.apache.hadoop.security.TestUGILoginFromKeytab
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.569 sec - in 
org.apache.hadoop.security.TestProxyUserFromEnv
Running org.apache.hadoop.security.authorize.TestProxyUsers
Running org.apache.hadoop.security.authorize.TestProxyServers
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.563 sec - in 
org.apache.hadoop.security.authorize.TestProxyServers
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.083 sec - in 
org.apache.hadoop.security.TestUserGroupInformation
Running org.apache.hadoop.security.authorize.TestServiceAuthorization
Running org.apache.hadoop.security.authorize.TestAccessControlList
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.243 sec - in 
org.apache.hadoop.security.authorize.TestProxyUsers
Running org.apache.hadoop.security.alias.TestCredShell
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.25 sec - in 
org.apache.hadoop.security.authorize.TestAccessControlList
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.535 sec - in 
org.apache.hadoop.security.authorize.TestServiceAuthorization
Running org.apache.hadoop.security.alias.TestCredentialProviderFactory
Running org.apache.hadoop.security.alias.TestCredentialProvider
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.229 sec - in 
org.apache.hadoop.security.alias.TestCredentialProvider
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.624 sec - in 
org.apache.hadoop.security.alias.TestCredShell
Running org.apache.hadoop.security.TestAuthenticationFilter
Running org.apache.hadoop.security.TestLdapGroupsMapping
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.688 sec - in 
org.apache.hadoop.security.TestAuthenticationFilter
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.645 sec - in 
org.apache.hadoop.security.alias.TestCredentialProviderFactory
Running org.apache.hadoop.security.token.TestDtUtilShell
Running org.apache.hadoop.security.token.TestToken
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.625 sec - in 
org.apache.hadoop.security.token.TestToken
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.692 sec - in 
org.apache.hadoop.security.TestLdapGroupsMapping
Running org.apache.hadoop.security.token.delegation.TestDelegationToken
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.346 sec - in 
org.apache.hadoop.security.token.TestDtUtilShell
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.893 sec - in 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.487 sec - in 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Running org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken
Running 

Re: Compatibility guidelines for toString overrides

2016-05-12 Thread Steve Loughran

> On 12 May 2016, at 17:57, Chris Nauroth  wrote:
> 
> I'm in favor of making a statement in the compatibility guidelines that
> there is no guarantee of stability or backwards-compatibility for
> toString() output.  If a proposed patch introduces new dependencies on a
> stable toString() output, such as for UI display or object serialization,
> then I'd consider -1'ing it and instead asking that the logic move to a
> different method that can provide that guarantee, i.e. toStableString().
> There are further comments justifying this on HADOOP-13028 and HDFS-9732.
> 
> --Chris Nauroth
> 


+1 


- We now need to be rigorous about using a specific method for those use cases 
where we offer guarantees of stability. Maybe making this an interface, 
"StableString", with the relevant method can help manage this.
- We're going to need to see where existing tool code logs toString() values 
and move it to explicitly stable code. How best to do that?
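
A minimal sketch of that "StableString" interface idea, assuming the name and 
a single method (neither is an existing Hadoop API):

/**
 * Sketch of the floated marker interface: implementing classes promise a
 * documented, stable format from this method, leaving toString() free to
 * evolve as a debugging aid.
 */
public interface StableString {
  String toStableString();
}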

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Compatibility guidelines for toString overrides

2016-05-12 Thread Chris Nauroth
Hello Allen,

The intent is not for this rule to override other compatibility rules,
such as the important CLI output rule.  It's also not intended to give us
free rein to change existing toString() implementations without due
diligence.  If a patch changes an existing toString() implementation that
already goes out to the shell or any other form of external serialization,
then the patch needs to be declined.  (I concur that relying on toString()
like this should never be done, and I'd encourage us to fix any
occurrences we find.)

Instead, the intent is to advertise to Java API consumers that toString()
output may evolve freely, and therefore we recommend against writing Java
code that depends on that output format.
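
A short sketch of that pattern, using Hadoop's existing InterfaceAudience and 
InterfaceStability annotations; the BlockSummary class and its toStableString() 
method are illustrative assumptions, not an agreed API:

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

/**
 * Illustrative only: a Public/Stable class whose toString() is explicitly
 * marked Unstable, with a separate method carrying the stable format.
 */
@InterfaceAudience.Public
@InterfaceStability.Stable
public class BlockSummary {
  private final long blockId;
  private final long length;

  public BlockSummary(long blockId, long length) {
    this.blockId = blockId;
    this.length = length;
  }

  /** Debugging aid only; this format may change between releases. */
  @InterfaceStability.Unstable
  @Override
  public String toString() {
    return "BlockSummary{blockId=" + blockId + ", length=" + length + "}";
  }

  /** Hypothetical stable-format method for CLI/UI output. */
  @InterfaceStability.Stable
  public String toStableString() {
    return blockId + "," + length;
  }
}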

HDFS-9732 is a good example of how to handle this.  I didn't explicitly -1
it, but I did call out the CLI compatibility problem and recommend how to
change the patch so that it preserves compatibility.

Does this help address your concerns, or is the full code audit the only
thing that would convince you to lift your -1?

--Chris Nauroth




On 5/12/16, 11:55 AM, "Allen Wittenauer"
 wrote:

>
>
>-1 without an audit of every toString usage.  We have recent examples
>(e.g., HDFS-9732) where toString was used for output from CLI utilities.
>(which, IMO, should never have been done, but it's too late now...)
>Output of CLI utilities is most definitely covered by the compatibility
>guidelines.  Therefore, changing toString output indiscriminately may
>lead to compatibility breaks.
>
>
>
>> On May 12, 2016, at 10:23 AM, Sangjin Lee  wrote:
>> 
>> +1.
>> 
>> Thanks,
>> Sangjin
>> 
>> On Thu, May 12, 2016 at 10:05 AM, Ravi Prakash 
>>wrote:
>> 
>>> Thanks sounds reasonable Colin. +1 to not using toString() as an API
>>> 
>>> On Thu, May 12, 2016 at 9:57 AM, Chris Nauroth
>>>
>>> wrote:
>>> 
 I'm in favor of making a statement in the compatibility guidelines
that
 there is no guarantee of stability or backwards-compatibility for
 toString() output.  If a proposed patch introduces new dependencies
on a
 stable toString() output, such as for UI display or object
serialization,
 then I'd consider -1'ing it and instead asking that the logic move to
a
 different method that can provide that guarantee, i.e.
toStableString().
 There are further comments justifying this on HADOOP-13028 and
HDFS-9732.
 
 --Chris Nauroth
 
 
 
 
 On 5/12/16, 9:32 AM, "Colin McCabe"  wrote:
 
> Hi all,
> 
> Recently a discussion came up on HADOOP-13028 about the wisdom of
> overloading S3AInputStream#toString to output statistics information.
> It's a difficult judgement for me to make, since I'm not aware of any
> compatibility guidelines for InputStream#toString.  Do we have
> compatibility guidelines for toString functions?
> 
> It seems like the output of toString functions is usually used as a
> debugging aid, rather than as a stable format suitable for UI
>display or
> object serialization.  Clearly, there are a few cases where we might
> want to specifically declare that a toString method is a stable API.
> However, I think if we attempt to treat the toString output of all
> public classes as stable, we will have greatly increased the API
> surface.  Should we formalize this and declare that toString
>functions
> are @Unstable, Evolving unless declared otherwise?
> 
> best,
> Colin
> 
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> 
> 
 
 
 -
 To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
 For additional commands, e-mail: common-dev-h...@hadoop.apache.org
 
 
>>> 
>
>
>-
>To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Compatibility guidelines for toString overrides

2016-05-12 Thread Allen Wittenauer


-1 without an audit of every toString usage.  We have recent examples (e.g., 
HDFS-9732) where toString was used for output from CLI utilities. (which, IMO, 
should never have been done, but it’s too late now...) Output of CLI utilities 
is most definitely covered by the compatibility guidelines.  Therefore, 
changing toString output indiscriminately may lead to compatibility breaks.



> On May 12, 2016, at 10:23 AM, Sangjin Lee  wrote:
> 
> +1.
> 
> Thanks,
> Sangjin
> 
> On Thu, May 12, 2016 at 10:05 AM, Ravi Prakash  wrote:
> 
>> Thanks sounds reasonable Colin. +1 to not using toString() as an API
>> 
>> On Thu, May 12, 2016 at 9:57 AM, Chris Nauroth 
>> wrote:
>> 
>>> I'm in favor of making a statement in the compatibility guidelines that
>>> there is no guarantee of stability or backwards-compatibility for
>>> toString() output.  If a proposed patch introduces new dependencies on a
>>> stable toString() output, such as for UI display or object serialization,
>>> then I'd consider -1'ing it and instead asking that the logic move to a
>>> different method that can provide that guarantee, i.e. toStableString().
>>> There are further comments justifying this on HADOOP-13028 and HDFS-9732.
>>> 
>>> --Chris Nauroth
>>> 
>>> 
>>> 
>>> 
>>> On 5/12/16, 9:32 AM, "Colin McCabe"  wrote:
>>> 
 Hi all,
 
 Recently a discussion came up on HADOOP-13028 about the wisdom of
 overloading S3AInputStream#toString to output statistics information.
 It's a difficult judgement for me to make, since I'm not aware of any
 compatibility guidelines for InputStream#toString.  Do we have
 compatibility guidelines for toString functions?
 
 It seems like the output of toString functions is usually used as a
 debugging aid, rather than as a stable format suitable for UI display or
 object serialization.  Clearly, there are a few cases where we might
 want to specifically declare that a toString method is a stable API.
 However, I think if we attempt to treat the toString output of all
 public classes as stable, we will have greatly increased the API
 surface.  Should we formalize this and declare that toString functions
 are @Unstable, Evolving unless declared otherwise?
 
 best,
 Colin
 
 -
 To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
 For additional commands, e-mail: common-dev-h...@hadoop.apache.org
 
 
>>> 
>>> 
>>> -
>>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>>> 
>>> 
>> 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] Set minimum version of Hadoop 3 to JDK8 (HADOOP-11858)

2016-05-12 Thread Li Lu
I’d like to bring YARN-4977 to attention regarding the move to Java 8. HADOOP-13083 does 
the maven change, and in yarn-api there are ~5000 javadoc warnings. 

Li Lu

> On May 10, 2016, at 08:32, Akira AJISAKA  wrote:
> 
> Hi developers,
> 
> Before cutting 3.0.0-alpha RC, I'd like to drop JDK7 support in trunk.
> Given this is a critical change, I'm thinking we should get the consensus 
> first.
> 
> One concern I think is, when the minimum version is set to JDK8, we need to 
> configure Jenkins to disable multi JDK test only in trunk.
> 
> Any thoughts?
> 
> Thanks,
> Akira
> 
> -
> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
> 
> 



Build failed in Jenkins: Hadoop-Common-trunk #2753

2016-05-12 Thread Apache Jenkins Server
See 

Changes:

[sjlee] YARN-4577. Enable aux services to have their own custom classpath/jar

[stevel] MAPREDUCE-6639 Process hangs in LocatedFileStatusFetcher if

[wang] Update project version to 3.0.0-alpha1-SNAPSHOT.

--
[...truncated 5162 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.99 sec - in 
org.apache.hadoop.security.TestKDiagNoKDC
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.932 sec - in 
org.apache.hadoop.security.TestGroupsCaching
Running org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Running org.apache.hadoop.security.TestUGILoginFromKeytab
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.798 sec - in 
org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.146 sec - in 
org.apache.hadoop.security.ssl.TestSSLFactory
Running org.apache.hadoop.security.TestUserGroupInformation
Running org.apache.hadoop.security.TestUGIWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.086 sec - in 
org.apache.hadoop.security.TestUGIWithExternalKdc
Running org.apache.hadoop.security.http.TestRestCsrfPreventionFilter
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.543 sec - in 
org.apache.hadoop.security.http.TestRestCsrfPreventionFilter
Running org.apache.hadoop.security.http.TestXFrameOptionsFilter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.381 sec - in 
org.apache.hadoop.security.http.TestXFrameOptionsFilter
Running org.apache.hadoop.security.http.TestCrossOriginFilter
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.436 sec - in 
org.apache.hadoop.security.http.TestCrossOriginFilter
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.795 sec - in 
org.apache.hadoop.security.TestUGILoginFromKeytab
Running org.apache.hadoop.security.authorize.TestProxyUsers
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.569 sec - in 
org.apache.hadoop.security.TestProxyUserFromEnv
Running org.apache.hadoop.security.authorize.TestProxyServers
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.075 sec - in 
org.apache.hadoop.security.TestUserGroupInformation
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.532 sec - in 
org.apache.hadoop.security.authorize.TestProxyServers
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.211 sec - in 
org.apache.hadoop.security.authorize.TestProxyUsers
Running org.apache.hadoop.security.authorize.TestServiceAuthorization
Running org.apache.hadoop.security.authorize.TestAccessControlList
Running org.apache.hadoop.security.alias.TestCredShell
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.022 sec - in 
org.apache.hadoop.security.authorize.TestAccessControlList
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.372 sec - in 
org.apache.hadoop.security.authorize.TestServiceAuthorization
Running org.apache.hadoop.security.alias.TestCredentialProviderFactory
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.411 sec - in 
org.apache.hadoop.security.alias.TestCredShell
Running org.apache.hadoop.security.alias.TestCredentialProvider
Running org.apache.hadoop.security.TestAuthenticationFilter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.199 sec - in 
org.apache.hadoop.security.alias.TestCredentialProvider
Running org.apache.hadoop.security.TestLdapGroupsMapping
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.67 sec - in 
org.apache.hadoop.security.TestAuthenticationFilter
Running org.apache.hadoop.security.token.TestDtUtilShell
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.462 sec - in 
org.apache.hadoop.security.alias.TestCredentialProviderFactory
Running org.apache.hadoop.security.token.TestToken
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.546 sec - in 
org.apache.hadoop.security.token.TestToken
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.66 sec - in 
org.apache.hadoop.security.TestLdapGroupsMapping
Running org.apache.hadoop.security.token.delegation.TestDelegationToken
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.381 sec - in 
org.apache.hadoop.security.token.TestDtUtilShell
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.913 sec - in 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.559 sec - in 

Re: Release numbering for 3.x leading up to GA

2016-05-12 Thread Andrew Wang
Hi folks,

I think it's working, though it takes some time for the rename to propagate
in JIRA. JIRA is also currently being hammered by spammers, which might be
related.

Anyway, the new "3.0.0-alpha1" version should be live for all four
subprojects, so have at it!

Best,
Andrew

On Thu, May 12, 2016 at 11:01 AM, Gangumalla, Uma 
wrote:

> Thanks Andrew for driving. Sounds good. Go ahead please.
>
> Good luck :-)
>
> Regards,
> Uma
>
> On 5/12/16, 10:52 AM, "Andrew Wang"  wrote:
>
> >Hi all,
> >
> >Sounds like we have general agreement on this release numbering scheme for
> >3.x.
> >
> >I'm going to attempt some mvn and JIRA invocations to get the version
> >numbers lined up for alpha1, wish me luck.
> >
> >Best,
> >Andrew
> >
> >On Tue, May 3, 2016 at 9:52 AM, Roman Shaposhnik 
> >wrote:
> >
> >> On Tue, May 3, 2016 at 8:18 AM, Karthik Kambatla 
> >> wrote:
> >> > The naming scheme sounds good. Since we want to start out sooner, I am
> >> > assuming we are not limiting ourselves to two alphas as the email
> >>might
> >> > indicate.
> >> >
> >> > Also, as the release manager, can you elaborate on your definitions of
> >> > alpha and beta? Specifically, when do we expect downstream projects to
> >> try
> >> > and integrate and when we expect Hadoop users to try out the bits?
> >>
> >> Not to speak for all the downstream PMCs, but the Bigtop project will jump
> >> on the first alpha the same way we jumped on the first alpha back
> >> in the 1 -> 2 transition period.
> >>
> >> Given that Bigtop currently integrates quite a bit of Hadoop ecosystem
> >> that work is going to produce valuable feedback that we plan to
> >>communicate
> >> to the individual PMCs. What PMCs do with that feedback, of course, will
> >> be up to them (obviously Bigtop can't take the ownership of issues that
> >> go outside of integration work between projects in the Hadoop ecosystem).
> >>
> >> Thanks,
> >> Roman.
> >>
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


[jira] [Resolved] (HADOOP-12844) Recover when S3A fails on IOException in read()

2016-05-12 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-12844.
-
   Resolution: Fixed
Fix Version/s: 2.8.0

> Recover when S3A fails on IOException in read()
> ---
>
> Key: HADOOP-12844
> URL: https://issues.apache.org/jira/browse/HADOOP-12844
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.1, 2.7.2
>Reporter: Pieter Reuse
>Assignee: Pieter Reuse
> Fix For: 2.8.0
>
> Attachments: HADOOP-12844.001.patch
>
>
> This simple patch catches IOExceptions in S3AInputStream.read(byte[] buf, int 
> off, int len) and reopens the connection at the same location as it was 
> before the exception.
> This is similar to the functionality introduced in S3N in 
> [HADOOP-6254|https://issues.apache.org/jira/browse/HADOOP-6254], for exactly 
> the same reason.
> Patch developed in cooperation with [~emres].
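
For readers skimming the archive, a minimal sketch of the catch-and-reopen
pattern the description refers to; the class, the reopenAt() helper, and the
fields are illustrative assumptions, not the contents of HADOOP-12844.001.patch:

import java.io.IOException;
import java.io.InputStream;

// Sketch only: on an IOException mid-read, reopen the underlying stream
// at the pre-failure position and retry the read once.
abstract class RecoveringReadSketch extends InputStream {
  private InputStream wrapped;   // the inner S3 object stream
  private long pos;              // current read position

  /** Reopen the inner stream at the given offset (implementation elided). */
  protected abstract InputStream reopenAt(long offset) throws IOException;

  @Override
  public int read(byte[] buf, int off, int len) throws IOException {
    int bytesRead;
    try {
      bytesRead = wrapped.read(buf, off, len);
    } catch (IOException e) {
      // Connection dropped mid-read: reopen at the same location as
      // before the exception, then retry once.
      wrapped = reopenAt(pos);
      bytesRead = wrapped.read(buf, off, len);
    }
    if (bytesRead > 0) {
      pos += bytesRead;
    }
    return bytesRead;
  }

  @Override
  public int read() throws IOException {
    byte[] one = new byte[1];
    return read(one, 0, 1) == -1 ? -1 : (one[0] & 0xff);
  }
}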



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13058) S3A FS fails during init against a read-only FS if multipart purge is enabled

2016-05-12 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13058.
-
Resolution: Duplicate

fixed in HADOOP-13028

> S3A FS fails during init against a read-only FS if multipart purge is enabled
> -
>
> Key: HADOOP-13058
> URL: https://issues.apache.org/jira/browse/HADOOP-13058
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> If you try to open a read-only filesystem, and the multipart upload option is 
> set to purge existing uploads, then the FS fails to load with an access 
> denied exception.
> It should catch the exception, downgrade it to a debug-level log, and defer 
> access-rights failures until a file write operation is actually attempted.
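
A hedged sketch of that fix direction (the actual change landed via
HADOOP-13028); the class and the logging wiring here are assumptions, though
TransferManager.abortMultipartUploads(String, Date) is the real AWS SDK call:

import java.util.Date;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.services.s3.transfer.TransferManager;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch only: treat "access denied" during the optional multipart purge
// as non-fatal so a read-only bucket can still be opened; real writes
// will surface the permission problem later.
class MultipartPurgeSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(MultipartPurgeSketch.class);

  void purgeExistingUploads(TransferManager transfers, String bucket,
      Date purgeBefore) {
    try {
      transfers.abortMultipartUploads(bucket, purgeBefore);
    } catch (AmazonServiceException e) {
      if (e.getStatusCode() == 403) {
        // No permission to purge: downgrade to debug and continue init.
        LOG.debug("Failed to purge multipart uploads against {};"
            + " FS may be read only", bucket, e);
      } else {
        throw e;   // anything else is still fatal
      }
    }
  }
}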



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13047) S3a Forward seek in stream length to be configurable

2016-05-12 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13047.
-
Resolution: Duplicate

> S3a Forward seek in stream length to be configurable
> 
>
> Key: HADOOP-13047
> URL: https://issues.apache.org/jira/browse/HADOOP-13047
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13047.WIP.2.patch, HADOOP-13047.WIP.patch
>
>
> Even with lazy seek, tests can show that sometimes a short-distance forward 
> seek is triggering a close + reopen, because the threshold for the seek is 
> simply available bytes in the inner stream.
> A configurable threshold would allow data to be read and discarded before 
> that seek. This should be beneficial over long-haul networks as the time to 
> set up the TCP channel is high, and TCP-slow-start means that the ramp up of 
> bandwidth is slow. In such deployments, it will be better to read forward than 
> re-open, though the exact "best" number will vary with client and endpoint.
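
A short sketch of what such a threshold could look like; forwardSeekLimit and
reopenAt() are assumed names, not the eventual configuration key:

import java.io.IOException;
import java.io.InputStream;

// Sketch only: within the threshold, read-and-discard on the open
// connection; beyond it (or for backward seeks), close and reopen.
abstract class ForwardSeekSketch {
  private InputStream wrapped;
  private long pos;
  private final long forwardSeekLimit;   // bytes worth discarding

  ForwardSeekSketch(InputStream wrapped, long forwardSeekLimit) {
    this.wrapped = wrapped;
    this.forwardSeekLimit = forwardSeekLimit;
  }

  protected abstract InputStream reopenAt(long offset) throws IOException;

  void seek(long targetPos) throws IOException {
    long diff = targetPos - pos;
    if (diff > 0 && diff <= forwardSeekLimit) {
      // Cheap forward seek: skip bytes on the existing connection rather
      // than paying TCP setup plus slow-start for a new one.
      while (pos < targetPos) {
        long skipped = wrapped.skip(targetPos - pos);
        if (skipped <= 0) {
          break;   // cannot skip further; fall through to reopen
        }
        pos += skipped;
      }
      if (pos == targetPos) {
        return;
      }
    }
    // Backward seek, long forward seek, or skip fell short: reopen.
    wrapped.close();
    wrapped = reopenAt(targetPos);
    pos = targetPos;
  }
}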



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Release numbering for 3.x leading up to GA

2016-05-12 Thread Gangumalla, Uma
Thanks Andrew for driving. Sounds good. Go ahead please.

Good luck :-)

Regards,
Uma

On 5/12/16, 10:52 AM, "Andrew Wang"  wrote:

>Hi all,
>
>Sounds like we have general agreement on this release numbering scheme for
>3.x.
>
>I'm going to attempt some mvn and JIRA invocations to get the version
>numbers lined up for alpha1, wish me luck.
>
>Best,
>Andrew
>
>On Tue, May 3, 2016 at 9:52 AM, Roman Shaposhnik 
>wrote:
>
>> On Tue, May 3, 2016 at 8:18 AM, Karthik Kambatla 
>> wrote:
>> > The naming scheme sounds good. Since we want to start out sooner, I am
>> > assuming we are not limiting ourselves to two alphas as the email
>>might
>> > indicate.
>> >
>> > Also, as the release manager, can you elaborate on your definitions of
>> > alpha and beta? Specifically, when do we expect downstream projects to
>> try
>> > and integrate and when we expect Hadoop users to try out the bits?
>>
>> Not to speak for all the downstream PMCs, but the Bigtop project will jump
>> on the first alpha the same way we jumped on the first alpha back
>> in the 1 -> 2 transition period.
>>
>> Given that Bigtop currently integrates quite a bit of Hadoop ecosystem
>> that work is going to produce valuable feedback that we plan to
>>communicate
>> to the individual PMCs. What PMCs do with that feedback, of course, will
>> be up to them (obviously Bigtop can't take the ownership of issues that
>> go outside of integration work between projects in the Hadoop ecosystem).
>>
>> Thanks,
>> Roman.
>>


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Release numbering for 3.x leading up to GA

2016-05-12 Thread Andrew Wang
Hi all,

Sounds like we have general agreement on this release numbering scheme for
3.x.

I'm going to attempt some mvn and JIRA invocations to get the version
numbers lined up for alpha1, wish me luck.

Best,
Andrew

On Tue, May 3, 2016 at 9:52 AM, Roman Shaposhnik 
wrote:

> On Tue, May 3, 2016 at 8:18 AM, Karthik Kambatla 
> wrote:
> > The naming scheme sounds good. Since we want to start out sooner, I am
> > assuming we are not limiting ourselves to two alphas as the email might
> > indicate.
> >
> > Also, as the release manager, can you elaborate on your definitions of
> > alpha and beta? Specifically, when do we expect downstream projects to
> try
> > and integrate and when we expect Hadoop users to try out the bits?
>
> Not to speak for all the downstream PMCs, but the Bigtop project will jump
> on the first alpha the same way we jumped on the first alpha back
> in the 1 -> 2 transition period.
>
> Given that Bigtop currently integrates quite a bit of Hadoop ecosystem
> that work is going to produce valuable feedback that we plan to communicate
> to the individual PMCs. What PMCs do with that feedback, of course, will
> be up to them (obviously Bigtop can't take the ownership of issues that
> go outside of integration work between projects in the Hadoop ecosystem).
>
> Thanks,
> Roman.
>


Re: Compatibility guidelines for toString overrides

2016-05-12 Thread Sangjin Lee
+1.

Thanks,
Sangjin

On Thu, May 12, 2016 at 10:05 AM, Ravi Prakash  wrote:

> Thanks, sounds reasonable, Colin. +1 to not using toString() as an API.
>
> On Thu, May 12, 2016 at 9:57 AM, Chris Nauroth 
> wrote:
>
> > I'm in favor of making a statement in the compatibility guidelines that
> > there is no guarantee of stability or backwards-compatibility for
> > toString() output.  If a proposed patch introduces new dependencies on a
> > stable toString() output, such as for UI display or object serialization,
> > then I'd consider -1'ing it and instead asking that the logic move to a
> > different method that can provide that guarantee, i.e. toStableString().
> > There are further comments justifying this on HADOOP-13028 and HDFS-9732.
> >
> > --Chris Nauroth
> >
> >
> >
> >
> > On 5/12/16, 9:32 AM, "Colin McCabe"  wrote:
> >
> > >Hi all,
> > >
> > >Recently a discussion came up on HADOOP-13028 about the wisdom of
> > >overriding S3AInputStream#toString to output statistics information.
> > >It's a difficult judgement for me to make, since I'm not aware of any
> > >compatibility guidelines for InputStream#toString.  Do we have
> > >compatibility guidelines for toString functions?
> > >
> > >It seems like the output of toString functions is usually used as a
> > >debugging aid, rather than as a stable format suitable for UI display or
> > >object serialization.  Clearly, there are a few cases where we might
> > >want to specifically declare that a toString method is a stable API.
> > >However, I think if we attempt to treat the toString output of all
> > >public classes as stable, we will have greatly increased the API
> > >surface.  Should we formalize this and declare that toString functions
> > >are @Unstable, Evolving unless declared otherwise?
> > >
> > >best,
> > >Colin
> > >
> > >-
> > >To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > >For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> > >
> > >
> >
> >
> > -
> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >
> >
>


Build failed in Jenkins: Hadoop-Common-trunk #2752

2016-05-12 Thread Apache Jenkins Server
See 

Changes:

[stevel] HADOOP-13116 Jets3tNativeS3FileSystemContractTest does not run.

--
[...truncated 5161 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.895 sec - in 
org.apache.hadoop.security.TestKDiagNoKDC
Running org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.987 sec - in 
org.apache.hadoop.security.TestGroupsCaching
Running org.apache.hadoop.security.TestUGILoginFromKeytab
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.688 sec - in 
org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.723 sec - in 
org.apache.hadoop.security.ssl.TestSSLFactory
Running org.apache.hadoop.security.TestUserGroupInformation
Running org.apache.hadoop.security.TestUGIWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.097 sec - in 
org.apache.hadoop.security.TestUGIWithExternalKdc
Running org.apache.hadoop.security.http.TestRestCsrfPreventionFilter
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.548 sec - in 
org.apache.hadoop.security.http.TestRestCsrfPreventionFilter
Running org.apache.hadoop.security.http.TestXFrameOptionsFilter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.387 sec - in 
org.apache.hadoop.security.http.TestXFrameOptionsFilter
Running org.apache.hadoop.security.http.TestCrossOriginFilter
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.499 sec - in 
org.apache.hadoop.security.http.TestCrossOriginFilter
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.611 sec - in 
org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.033 sec - in 
org.apache.hadoop.security.TestUGILoginFromKeytab
Running org.apache.hadoop.security.authorize.TestProxyUsers
Running org.apache.hadoop.security.authorize.TestProxyServers
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.989 sec - in 
org.apache.hadoop.security.TestUserGroupInformation
Running org.apache.hadoop.security.authorize.TestServiceAuthorization
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.51 sec - in 
org.apache.hadoop.security.authorize.TestProxyServers
Running org.apache.hadoop.security.authorize.TestAccessControlList
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.07 sec - in 
org.apache.hadoop.security.authorize.TestProxyUsers
Running org.apache.hadoop.security.alias.TestCredShell
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.36 sec - in 
org.apache.hadoop.security.authorize.TestServiceAuthorization
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.054 sec - in 
org.apache.hadoop.security.authorize.TestAccessControlList
Running org.apache.hadoop.security.alias.TestCredentialProviderFactory
Running org.apache.hadoop.security.alias.TestCredentialProvider
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.197 sec - in 
org.apache.hadoop.security.alias.TestCredentialProvider
Running org.apache.hadoop.security.TestAuthenticationFilter
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.564 sec - in 
org.apache.hadoop.security.alias.TestCredShell
Running org.apache.hadoop.security.TestLdapGroupsMapping
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.712 sec - in 
org.apache.hadoop.security.TestAuthenticationFilter
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.619 sec - in 
org.apache.hadoop.security.alias.TestCredentialProviderFactory
Running org.apache.hadoop.security.token.TestDtUtilShell
Running org.apache.hadoop.security.token.TestToken
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.656 sec - in 
org.apache.hadoop.security.token.TestToken
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.869 sec - in 
org.apache.hadoop.security.TestLdapGroupsMapping
Running org.apache.hadoop.security.token.delegation.TestDelegationToken
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.488 sec - in 
org.apache.hadoop.security.token.TestDtUtilShell
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.901 sec - in 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Running org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.503 sec - in 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Running 

Re: [DISCUSS] Set minimum version of Hadoop 3 to JDK8 (HADOOP-11858)

2016-05-12 Thread Masatake Iwasaki

+1

Masatake

On 5/12/16 13:11, Gangumalla, Uma wrote:

+1

Regards,
Uma

On 5/10/16, 2:24 PM, "Andrew Wang"  wrote:


+1

On Tue, May 10, 2016 at 12:36 PM, Ravi Prakash 
wrote:


+1. Thanks for driving this Akira

On Tue, May 10, 2016 at 10:25 AM, Tsuyoshi Ozawa 
wrote:


Before cutting 3.0.0-alpha RC, I'd like to drop JDK7 support in trunk.

Sounds good. To do so, we need to check the blockers of 3.0.0-alpha
RC, especially upgrading all dependencies which use reflection, first.

Thanks,
- Tsuyoshi

On Tue, May 10, 2016 at 8:32 AM, Akira AJISAKA
 wrote:

Hi developers,

Before cutting 3.0.0-alpha RC, I'd like to drop JDK7 support in trunk.

Given this is a critical change, I'm thinking we should get consensus first.

One concern is that when the minimum version is set to JDK8, we need to
configure Jenkins to disable the multi-JDK test only in trunk.

Any thoughts?

Thanks,
Akira



-

To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org


-
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org




-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org




-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Compatibility guidelines for toString overrides

2016-05-12 Thread Ravi Prakash
Thanks, sounds reasonable, Colin. +1 to not using toString() as an API.

On Thu, May 12, 2016 at 9:57 AM, Chris Nauroth 
wrote:

> I'm in favor of making a statement in the compatibility guidelines that
> there is no guarantee of stability or backwards-compatibility for
> toString() output.  If a proposed patch introduces new dependencies on a
> stable toString() output, such as for UI display or object serialization,
> then I'd consider -1'ing it and instead asking that the logic move to a
> different method that can provide that guarantee, i.e. toStableString().
> There are further comments justifying this on HADOOP-13028 and HDFS-9732.
>
> --Chris Nauroth
>
>
>
>
> On 5/12/16, 9:32 AM, "Colin McCabe"  wrote:
>
> >Hi all,
> >
> >Recently a discussion came up on HADOOP-13028 about the wisdom of
> >overriding S3AInputStream#toString to output statistics information.
> >It's a difficult judgement for me to make, since I'm not aware of any
> >compatibility guidelines for InputStream#toString.  Do we have
> >compatibility guidelines for toString functions?
> >
> >It seems like the output of toString functions is usually used as a
> >debugging aid, rather than as a stable format suitable for UI display or
> >object serialization.  Clearly, there are a few cases where we might
> >want to specifically declare that a toString method is a stable API.
> >However, I think if we attempt to treat the toString output of all
> >public classes as stable, we will have greatly increased the API
> >surface.  Should we formalize this and declare that toString functions
> >are @Unstable, Evolving unless declared otherwise?
> >
> >best,
> >Colin
> >
> >-
> >To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> >For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >
> >
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


Re: Compatibility guidelines for toString overrides

2016-05-12 Thread Chris Nauroth
I'm in favor of making a statement in the compatibility guidelines that
there is no guarantee of stability or backwards-compatibility for
toString() output.  If a proposed patch introduces new dependencies on a
stable toString() output, such as for UI display or object serialization,
then I'd consider -1'ing it and instead asking that the logic move to a
different method that can provide that guarantee, i.e. toStableString().
There are further comments justifying this on HADOOP-13028 and HDFS-9732.

--Chris Nauroth




On 5/12/16, 9:32 AM, "Colin McCabe"  wrote:

>Hi all,
>
>Recently a discussion came up on HADOOP-13028 about the wisdom of
>overriding S3AInputStream#toString to output statistics information.
>It's a difficult judgement for me to make, since I'm not aware of any
>compatibility guidelines for InputStream#toString.  Do we have
>compatibility guidelines for toString functions?
>
>It seems like the output of toString functions is usually used as a
>debugging aid, rather than as a stable format suitable for UI display or
>object serialization.  Clearly, there are a few cases where we might
>want to specifically declare that a toString method is a stable API.
>However, I think if we attempt to treat the toString output of all
>public classes as stable, we will have greatly increased the API
>surface.  Should we formalize this and declare that toString functions
>are @Unstable, Evolving unless declared otherwise?
>
>best,
>Colin
>
>-
>To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Jenkins build is back to normal : Hadoop-common-trunk-Java8 #1464

2016-05-12 Thread Apache Jenkins Server
See 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Compatibility guidelines for toString overrides

2016-05-12 Thread Colin McCabe
Hi all,

Recently a discussion came up on HADOOP-13028 about the wisdom of
overriding S3AInputStream#toString to output statistics information. 
It's a difficult judgement for me to make, since I'm not aware of any
compatibility guidelines for InputStream#toString.  Do we have
compatibility guidelines for toString functions?

It seems like the output of toString functions is usually used as a
debugging aid, rather than as a stable format suitable for UI display or
object serialization.  Clearly, there are a few cases where we might
want to specifically declare that a toString method is a stable API. 
However, I think if we attempt to treat the toString output of all
public classes as stable, we will have greatly increased the API
surface.  Should we formalize this and declare that toString functions
are @Unstable, Evolving unless declared otherwise?

best,
Colin
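
To make the proposal concrete, a sketch of the convention under discussion;
toStableString() is a hypothetical name from this thread, not an existing
Hadoop API:

import org.apache.hadoop.classification.InterfaceStability;

// Sketch only: toString() stays a debugging aid with no stability
// promise, while a separately named method carries any guarantee.
public class ExampleStream {
  private final String uri;
  private long pos;

  public ExampleStream(String uri) {
    this.uri = uri;
  }

  /** Debugging aid only; format may change between releases. */
  @InterfaceStability.Unstable
  @Override
  public String toString() {
    return "ExampleStream{uri=" + uri + ", pos=" + pos + '}';
  }

  /** Stable, parseable representation for UI display or serialization. */
  @InterfaceStability.Stable
  public String toStableString() {
    return uri;
  }
}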

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13078) switch hadoop-aws back to using the (heavy) amazon-sdk JAR

2016-05-12 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13078.
-
Resolution: Won't Fix

> switch hadoop-aws back to using the (heavy) amazon-sdk JAR
> --
>
> Key: HADOOP-13078
> URL: https://issues.apache.org/jira/browse/HADOOP-13078
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> Hadoop 2.6-2.7 uses the full amazon-aws-sdk JAR. Hadoop 2.8+ has switched to 
> the amazon-s3-sdk jar because it was lighter weight. 
> I want to return to the full JAR before 2.8 ships, for three reasons:
> # downstream code: if someone is already including, depending on, or upgrading 
> the AWS SDK, switching to the S3 SDK complicates packaging and distribution. 
> If directly depended on via Maven dependencies, it breaks the build.
> # some of the 2.8+ patches, e.g. HADOOP-12537, have to add another part of 
> the S3 SDK to handle temporary credentials. This will make life even more 
> complex downstream.
> # if the hadoop-aws module ever adds more features (e.g. an s3mper-style use 
> of DynamoDB for directory-structure storage), then again: more JARs, more 
> complexity.
> Let's just change the build to return to the original JAR. Yes, it is heavy, 
> but it will be a consistent heaviness for all projects downstream.
> This change *must* go into 2.8 if we don't want to start breaking things.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Build failed in Jenkins: Hadoop-common-trunk-Java8 #1463

2016-05-12 Thread Apache Jenkins Server
See 

Changes:

[stevel] HADOOP-13122 Customize User-Agent header sent in HTTP requests by S3A.

--
[...truncated 5581 lines...]
Running org.apache.hadoop.util.TestChunkedArrayList
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.798 sec - in 
org.apache.hadoop.util.TestChunkedArrayList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestVersionUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.181 sec - in 
org.apache.hadoop.util.TestVersionUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.27 sec - in 
org.apache.hadoop.util.TestProtoUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestLightWeightGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.167 sec - in 
org.apache.hadoop.util.TestLightWeightGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGSet
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.238 sec - in 
org.apache.hadoop.util.TestGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringInterner
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.103 sec - in 
org.apache.hadoop.util.TestStringInterner
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestZKUtil
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.115 sec - in 
org.apache.hadoop.util.TestZKUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringUtils
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.269 sec - in 
org.apache.hadoop.util.TestStringUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFindClass
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.662 sec - in 
org.apache.hadoop.util.TestFindClass
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGenericOptionsParser
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.288 sec - in 
org.apache.hadoop.util.TestLightWeightCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.048 sec - in 
org.apache.hadoop.util.TestGenericOptionsParser
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestRunJar
Running org.apache.hadoop.util.TestSysInfoLinux
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.244 sec - in 
org.apache.hadoop.util.TestSysInfoLinux
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.507 sec - in 
org.apache.hadoop.util.TestRunJar
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestDirectBufferPool
Running org.apache.hadoop.util.TestFileBasedIPList
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.145 sec - in 
org.apache.hadoop.util.TestDirectBufferPool
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.224 sec - in 
org.apache.hadoop.util.TestFileBasedIPList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIndexedSort
Running org.apache.hadoop.util.TestIdentityHashStore
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.256 sec - in 
org.apache.hadoop.util.TestIdentityHashStore
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.823 sec - in 
org.apache.hadoop.util.TestIndexedSort
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestMachineList
Running org.apache.hadoop.util.TestWinUtils
Tests run: 11, Failures: 0, Errors: 0, Skipped: 11, Time elapsed: 

[jira] [Created] (HADOOP-13139) Branch-2: S3a to use thread pool that blocks clients

2016-05-12 Thread Pieter Reuse (JIRA)
Pieter Reuse created HADOOP-13139:
-

 Summary: Branch-2: S3a to use thread pool that blocks clients
 Key: HADOOP-13139
 URL: https://issues.apache.org/jira/browse/HADOOP-13139
 Project: Hadoop Common
  Issue Type: Task
  Components: fs/s3
Reporter: Pieter Reuse
Assignee: Pieter Reuse


HADOOP-11684 was accepted into trunk, but was not applied to branch-2. I will 
attach a patch applicable to branch-2.

It should be noted in CHANGES-2.8.0.txt that the config parameter 
'fs.s3a.threads.core' has been removed and that the behavior of the ThreadPool 
for s3a has been changed.
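
For context, a minimal sketch of the "blocks clients" behaviour that
HADOOP-11684 introduced for s3a: once all worker threads and queue slots are
busy, submit() blocks the caller instead of rejecting the task. This is an
illustration, not the actual BlockingThreadPoolExecutorService from trunk:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class BlockingPoolSketch {
  private final ExecutorService delegate;
  private final Semaphore permits;

  public BlockingPoolSketch(int threads, int queueCapacity) {
    this.delegate = Executors.newFixedThreadPool(threads);
    this.permits = new Semaphore(threads + queueCapacity);
  }

  public void submit(Runnable task) throws InterruptedException {
    permits.acquire();   // blocks the client when the pool is saturated
    try {
      delegate.execute(() -> {
        try {
          task.run();
        } finally {
          permits.release();   // free a slot for a waiting submitter
        }
      });
    } catch (RuntimeException e) {
      permits.release();   // submission itself failed: return the permit
      throw e;
    }
  }

  public void shutdown() {
    delegate.shutdown();
  }
}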



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Jenkins build is back to normal : Hadoop-Common-trunk #2750

2016-05-12 Thread Apache Jenkins Server
See 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Jenkins build is back to normal : Hadoop-common-trunk-Java8 #1462

2016-05-12 Thread Apache Jenkins Server
See 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13137) TraceAdmin should support Kerberized NameNode

2016-05-12 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-13137:


 Summary: TraceAdmin should support Kerberized NameNode
 Key: HADOOP-13137
 URL: https://issues.apache.org/jira/browse/HADOOP-13137
 Project: Hadoop Common
  Issue Type: Bug
  Components: tracing
Affects Versions: 2.6.0, 3.0.0
 Environment: CDH5.5.1 cluster with Kerberos
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


When I ran the {{hadoop trace}} command against a Kerberized NameNode, it failed 
with the following error:

[hdfs@weichiu-encryption-1 root]$ hadoop trace -list -host weichiu-encryption-1.vpc.cloudera.com:8022
16/05/12 00:02:13 WARN ipc.Client: Exception encountered while connecting to the 
server : java.lang.IllegalArgumentException: Failed to specify server's Kerberos 
principal name
16/05/12 00:02:13 WARN security.UserGroupInformation: 
PriviledgedActionException as:h...@vpc.cloudera.com (auth:KERBEROS) 
cause:java.io.IOException: java.lang.IllegalArgumentException: Failed to 
specify server's Kerberos principal name
Exception in thread "main" java.io.IOException: Failed on local exception: 
java.io.IOException: java.lang.IllegalArgumentException: Failed to specify 
server's Kerberos principal name; Host Details : local host is: 
"weichiu-encryption-1.vpc.cloudera.com/172.26.8.185"; destination host is: 
"weichiu-encryption-1.vpc.cloudera.com":8022;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
at org.apache.hadoop.ipc.Client.call(Client.java:1470)
at org.apache.hadoop.ipc.Client.call(Client.java:1403)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
at com.sun.proxy.$Proxy11.listSpanReceivers(Unknown Source)
at 
org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.listSpanReceivers(TraceAdminProtocolTranslatorPB.java:58)
at 
org.apache.hadoop.tracing.TraceAdmin.listSpanReceivers(TraceAdmin.java:68)
at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:177)
at org.apache.hadoop.tracing.TraceAdmin.main(TraceAdmin.java:195)
Caused by: java.io.IOException: java.lang.IllegalArgumentException: Failed to 
specify server's Kerberos principal name
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:682)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at 
org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:645)
at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:733)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1519)
at org.apache.hadoop.ipc.Client.call(Client.java:1442)
... 7 more
Caused by: java.lang.IllegalArgumentException: Failed to specify server's 
Kerberos principal name
at 
org.apache.hadoop.security.SaslRpcClient.getServerPrincipal(SaslRpcClient.java:322)
at 
org.apache.hadoop.security.SaslRpcClient.createSaslClient(SaslRpcClient.java:231)
at 
org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:159)
at 
org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
at 
org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:555)
at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:370)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:725)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:721)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:720)
... 10 more

It is failing because {{TraceAdmin}} does not set up the property 
{{CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY}}

Fixing it may require some restructuring, as the NameNode principal 
{{dfs.namenode.kerberos.principal}} is an HDFS property, but TraceAdmin is in 
hadoop-common. Alternatively, we could specify it with a new {{-principal}} 
command-line option. Any suggestions? Thanks
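
As a starting point, a hedged sketch of the first direction: populate the
server-principal key before the RPC proxy is created, so the SASL client can
resolve the NameNode's principal. The principal value is illustrative, and the
-principal option does not exist yet:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;

public class TraceAdminPrincipalSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // The value would come from the proposed -principal CLI option, or be
    // derived from dfs.namenode.kerberos.principal on the HDFS side.
    conf.set(CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY,
        "hdfs/_HOST@EXAMPLE.COM");
    // ...then run the TraceAdmin commands against this Configuration.
  }
}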



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13136) shade protobuf in the hadoop-common jar

2016-05-12 Thread Haohui Mai (JIRA)
Haohui Mai created HADOOP-13136:
---

 Summary: shade protobuf in the hadoop-common jar
 Key: HADOOP-13136
 URL: https://issues.apache.org/jira/browse/HADOOP-13136
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Haohui Mai


While Protobuf has good wire compatibility, its implementation has changed 
from time to time. It might be a good idea to shade it into the hadoop-common 
jar for better compatibility.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org