[jira] [Created] (HADOOP-12569) ZKFC should stop namenode before itself quit in some circumstances

2015-11-13 Thread Tao Jie (JIRA)
Tao Jie created HADOOP-12569:


 Summary: ZKFC should stop namenode before itself quit in some 
circumstances
 Key: HADOOP-12569
 URL: https://issues.apache.org/jira/browse/HADOOP-12569
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.6.0
Reporter: Tao Jie


We have encountered the following HA scenario:
NN1 (active) and zkfc1 on node1;
NN2 (standby) and zkfc2 on node2.
1. The network on node1 goes down and NN2 becomes active. On node1, zkfc1 kills 
itself since it cannot connect to ZooKeeper, leaving NN1 still running.
2. Several minutes later, the network on node1 recovers. NN1 is running but out 
of control, so NN1 and NN2 both run as active NNs.
Perhaps zkfc should stop the NN before quitting in such circumstances.
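
The proposed ordering can be sketched with stub functions (these are NOT 
Hadoop's real daemon commands; the names are purely illustrative):

```shell
# Sketch of the proposal: on ZooKeeper session loss, fence the co-located
# NameNode first, then let the ZKFC process exit, so a partitioned node
# cannot resurface later as a second active NN. Stub commands only.
stop_local_namenode() { echo "namenode stopped"; }
zkfc_exit()           { echo "zkfc exiting"; }

on_zk_session_loss() {
    stop_local_namenode   # local fencing before giving up
    zkfc_exit
}

out=$(on_zk_session_loss)
echo "$out"
```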




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Release votes and git-tags [Was Re: [VOTE] Release Apache Hadoop 2.7.2 RC0]

2015-11-13 Thread Steve Loughran

> On 12 Nov 2015, at 20:23, Vinod Kumar Vavilapalli  wrote:
> 
> We have always voted on release tar-balls, not svn branches / git commit-ids 
> or tags.
> 
> When we were on SVN, we used to paste in the voting thread the release branch 
> URL.
> 
> Since we moved to git, we stopped creating release branches and have always 
> used signed tags for snapshotting and posted tags in the voting threads.
> 
> To my knowledge, we never reuse tags - as they are themselves versioned - 
> e.g. hadoop-2.7.1-RC0 etc. So we don't run the risk of tags getting swapped 
> out from under us.
> 
> To me, tags are a simple way of going back to the code we ship in a release 
> without creating and maintaining explicit release branches. No one can 
> remember Commit IDs.
> 
> All that said, we can post the commit-IDs in future release votes for the 
> sake of convenience, but I disagree with the statement that we vote on 
> git-commits.
> 
> +Vinod
> 

I recognise that we vote on the src distro, but that source has an origin. And 
that has to be a commit #, not a tag, as somebody *may* change that tag later.

The ASF incubator will only approve git-based releases with that checksum, so 
it's the one we should all be using.

FWIW, the RC tag is 6f38ccc; I've checked it out and am verifying that it 
builds on Windows, including all the native libs.
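
The tag-vs-commit point can be illustrated with a throwaway repository (repo 
and tag names below are hypothetical): a tag name can be moved later, but the 
commit hash it resolves to is immutable, which is why recording the hash pins 
what was voted on.

```shell
# Illustration: dereference a tag to the immutable commit it points at.
# The tag is a movable pointer; the commit hash is what pins the release.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name=rm -c user.email=rm@example.org \
    commit -q --allow-empty -m "release candidate"
git tag release-2.7.2-RC0
# "tag^{commit}" peels the tag (annotated or lightweight) to its commit:
commit=$(git rev-parse "release-2.7.2-RC0^{commit}")
echo "$commit"
```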

Re: [VOTE] Release Apache Hadoop 2.7.2 RC0

2015-11-13 Thread Jason Lowe
-1 (binding)
Ran into public localization issues and filed YARN-4354. We need that resolved 
before the release is ready.  We will either need a timely fix or may have to 
revert YARN-2902 to unblock the release if my root-cause analysis is correct.  
I'll dig into this more today.

Jason

  From: Vinod Kumar Vavilapalli 
 To: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org 
Cc: vino...@apache.org 
 Sent: Wednesday, November 11, 2015 10:31 PM
 Subject: [VOTE] Release Apache Hadoop 2.7.2 RC0
   
Hi all,


I've created a release candidate RC0 for Apache Hadoop 2.7.2.


As discussed before, this is the next maintenance release to follow up
2.7.1.


The RC is available for validation at:

http://people.apache.org/~vinodkv/hadoop-2.7.2-RC0/


The RC tag in git is: release-2.7.2-RC0


The maven artifacts are available via repository.apache.org at

https://repository.apache.org/content/repositories/orgapachehadoop-1023/


As you may have noted, an unusually long 2.6.3 release caused 2.7.2 to slip
by quite a bit. This release's related discussion threads are linked below:
[1] and [2].


Please try the release and vote; the vote will run for the usual 5 days.


Thanks,

Vinod


[1]: 2.7.2 release plan: http://markmail.org/message/oozq3gvd4nhzsaes

[2]: Planning Apache Hadoop 2.7.2
http://markmail.org/message/iktqss2qdeykgpqk


  

Re: [VOTE] Release Apache Hadoop 2.7.2 RC0

2015-11-13 Thread Sunil Govind
+1 (non-binding)

- Built the tar ball from source and deployed it.
- Ran a few MR jobs successfully, along with some basic node label and
preemption verification.
- Verified RM Web UI, AM UI and Timeline UI. All pages look fine.

Thanks and Regards
Sunil

On Thu, Nov 12, 2015 at 10:01 AM Vinod Kumar Vavilapalli wrote:

> [...]


Re: Github integration for Hadoop

2015-11-13 Thread Allen Wittenauer

> On Nov 12, 2015, at 10:55 AM, Colin P. McCabe  wrote:
> 
> gerrit has a button on the UI to cherry-pick to different branches.
> The button creates separate "gerrit changes" which you can then
> commit.  Eventually we could hook those up to Jenkins-- something
> which we've never been able to do for different branches with the
> patch-file-based workflow.


If you're saying what I think you're saying, people have been able to 
submit patches via JIRA patch-file attachment to major branches for a few 
months now. Yetus closes the loop and supports pretty much any branch or git 
hash. (Github PRs also go to their respective branch or git hash.)

Re: [VOTE] Release Apache Hadoop 2.7.2 RC0

2015-11-13 Thread Vinod Kumar Vavilapalli
Thanks for reporting this Jason!

Everyone, I am canceling this RC given the feedback; we will go again after 
addressing the open issues.

Thanks
+Vinod

> On Nov 13, 2015, at 7:57 AM, Jason Lowe  wrote:
> 
> -1 (binding)
> 
> Ran into public localization issues and filed YARN-4354. We need that 
> resolved before the release is ready. We will either need a timely fix or 
> may have to revert YARN-2902 to unblock the release if my root-cause 
> analysis is correct. I'll dig into this more today.
> 
> Jason
> 
> [...]



Re: [DISCUSS] Looking to a 2.8.0 release

2015-11-13 Thread Sangjin Lee
I reviewed the current state of the YARN-2928 changes regarding their impact
when the timeline service v.2 is disabled. It appears that a lot of things
still get created and enabled unconditionally, regardless of configuration.
While this was understandable while we were implementing the feature, it
clearly needs to be cleaned up so that the timeline service v.2 doesn't
impact other components when disabled.

I filed a JIRA for that work:
https://issues.apache.org/jira/browse/YARN-4356

We need to complete it before we can merge.

Somewhat related is the status of the configuration and what it means in
various contexts (client/app-side vs. server-side, v.1 vs. v.2, etc.). I
know there is an ongoing discussion regarding YARN-4183. We'll need to
reflect the outcome of that discussion.

My overall impression of whether this can be done for 2.8 is that it looks
rather challenging given the suggested timeframe. We also need to complete
several major tasks before it is ready.

Sangjin


On Wed, Nov 11, 2015 at 5:49 PM, Sangjin Lee  wrote:

>
> On Wed, Nov 11, 2015 at 12:13 PM, Vinod Vavilapalli <
> vino...@hortonworks.com> wrote:
>
>> — YARN Timeline Service Next generation: YARN-2928: Lots of momentum,
>> but clearly a work in progress. Two options here:
>> — If it is safe to ship it into 2.8 in a disabled manner, we can
>> get the early code into trunk and all the way into 2.8.
>> — If it is not safe, it organically rolls over into 2.9.
>>
>
> I'll review the changes on YARN-2928 to see what impact it has (if any) if
> the timeline service v.2 is disabled.
>
> Another condition for it to make 2.8 is whether the branch will be in good
> enough shape in a couple of weeks that it adds value for folks who want to
> test it. Hopefully it will become clearer soon.
>
> Sangjin
>


Re: Github integration for Hadoop

2015-11-13 Thread Sean Busbey
Hi Colin!

If Yetus is working on an issue and can't tell what the intended branch is,
it points folks to project-specific contribution guides.
For Hadoop, the patch naming for specific branches should be covered in
this section of Hadoop's contribution guide:

http://wiki.apache.org/hadoop/HowToContribute#Naming_your_patch

Yetus actually supports a little more than that guide suggests. If a project
doesn't define a URL to point people at for help in naming patches, we
default to this guide:
https://yetus.apache.org/documentation/latest/precommit-patchnames/



On Fri, Nov 13, 2015 at 8:05 PM, Colin P. McCabe  wrote:

> Thanks, Allen, I wasn't aware that Yetus now supported testing for
> other branches.  Is there documentation about how to name the branch
> so it gets tested?
>
> best,
> Colin
>
> On Fri, Nov 13, 2015 at 7:52 AM, Allen Wittenauer wrote:
> > [...]



-- 
Sean


Re: Github integration for Hadoop

2015-11-13 Thread Colin P. McCabe
Thanks, Allen, I wasn't aware that Yetus now supported testing for
other branches.  Is there documentation about how to name the branch
so it gets tested?

best,
Colin

On Fri, Nov 13, 2015 at 7:52 AM, Allen Wittenauer  wrote:
>
> [...]


Build failed in Jenkins: Hadoop-Common-trunk #1991

2015-11-13 Thread Apache Jenkins Server
See 

Changes:

[ozawa] Move HADOOP-11361, HADOOP-12348 and HADOOP-12482 from 2.8.0 to 2.7.3 in

--
[...truncated 5417 lines...]
Running org.apache.hadoop.fs.permission.TestAcl
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.27 sec - in 
org.apache.hadoop.fs.permission.TestAcl
Running org.apache.hadoop.fs.permission.TestFsPermission
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.875 sec - in 
org.apache.hadoop.fs.permission.TestFsPermission
Running org.apache.hadoop.fs.sftp.TestSFTPFileSystem
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.362 sec - in 
org.apache.hadoop.fs.sftp.TestSFTPFileSystem
Running org.apache.hadoop.fs.TestFileContextDeleteOnExit
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.054 sec - in 
org.apache.hadoop.fs.TestFileContextDeleteOnExit
Running org.apache.hadoop.fs.TestFcLocalFsUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.234 sec - in 
org.apache.hadoop.fs.TestFcLocalFsUtil
Running org.apache.hadoop.fs.TestStat
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.075 sec - in 
org.apache.hadoop.fs.TestStat
Running org.apache.hadoop.fs.shell.TestCopy
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.381 sec - in 
org.apache.hadoop.fs.shell.TestCopy
Running org.apache.hadoop.fs.shell.TestTextCommand
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.653 sec - in 
org.apache.hadoop.fs.shell.TestTextCommand
Running org.apache.hadoop.fs.shell.TestCopyPreserveFlag
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.428 sec - in 
org.apache.hadoop.fs.shell.TestCopyPreserveFlag
Running org.apache.hadoop.fs.shell.TestAclCommands
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.314 sec - in 
org.apache.hadoop.fs.shell.TestAclCommands
Running org.apache.hadoop.fs.shell.find.TestFind
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.649 sec - in 
org.apache.hadoop.fs.shell.find.TestFind
Running org.apache.hadoop.fs.shell.find.TestPrint0
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.351 sec - in 
org.apache.hadoop.fs.shell.find.TestPrint0
Running org.apache.hadoop.fs.shell.find.TestPrint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.375 sec - in 
org.apache.hadoop.fs.shell.find.TestPrint
Running org.apache.hadoop.fs.shell.find.TestAnd
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.46 sec - in 
org.apache.hadoop.fs.shell.find.TestAnd
Running org.apache.hadoop.fs.shell.find.TestResult
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.129 sec - in 
org.apache.hadoop.fs.shell.find.TestResult
Running org.apache.hadoop.fs.shell.find.TestIname
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.561 sec - in 
org.apache.hadoop.fs.shell.find.TestIname
Running org.apache.hadoop.fs.shell.find.TestName
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.469 sec - in 
org.apache.hadoop.fs.shell.find.TestName
Running org.apache.hadoop.fs.shell.find.TestFilterExpression
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.511 sec - in 
org.apache.hadoop.fs.shell.find.TestFilterExpression
Running org.apache.hadoop.fs.shell.TestPathExceptions
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.1 sec - in 
org.apache.hadoop.fs.shell.TestPathExceptions
Running org.apache.hadoop.fs.shell.TestCommandFactory
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.253 sec - in 
org.apache.hadoop.fs.shell.TestCommandFactory
Running org.apache.hadoop.fs.shell.TestLs
Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.035 sec - in 
org.apache.hadoop.fs.shell.TestLs
Running org.apache.hadoop.fs.shell.TestMove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.489 sec - in 
org.apache.hadoop.fs.shell.TestMove
Running org.apache.hadoop.fs.shell.TestXAttrCommands
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.718 sec - in 
org.apache.hadoop.fs.shell.TestXAttrCommands
Running org.apache.hadoop.fs.shell.TestPathData
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.613 sec - in 
org.apache.hadoop.fs.shell.TestPathData
Running org.apache.hadoop.fs.shell.TestCount
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.319 sec - in 
org.apache.hadoop.fs.shell.TestCount
Running org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractRename
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.358 sec - in 
org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractRename
Running org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 1.227 sec - in 
org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractAppend
Running 

[jira] [Created] (HADOOP-12571) [JDK8] Remove XX:MaxPermSize setting from pom.xml

2015-11-13 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-12571:
--

 Summary: [JDK8] Remove XX:MaxPermSize setting from pom.xml
 Key: HADOOP-12571
 URL: https://issues.apache.org/jira/browse/HADOOP-12571
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Akira AJISAKA
Priority: Minor


{code:title=hadoop-project/pom.xml}
-Xmx2048m -XX:MaxPermSize=768m 
-XX:+HeapDumpOnOutOfMemoryError
{code}
{{-XX:MaxPermSize}} is not supported in JDK8; the JVM ignores it with a 
warning. It should be removed once support for JDK7 is dropped.
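
The fix can be sketched as a before/after on the Maven property holding the 
test JVM flags (the property name below is illustrative, not necessarily the 
one actually used in hadoop-project/pom.xml):

```xml
<!-- Before (JDK7-era): the test JVM args include a PermGen cap. -->
<properties>
  <test.jvm.argLine>-Xmx2048m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError</test.jvm.argLine>
</properties>

<!-- After: -XX:MaxPermSize dropped. JDK8 removed PermGen and only warns:
     "ignoring option MaxPermSize=768m; support was removed in 8.0". -->
<properties>
  <test.jvm.argLine>-Xmx2048m -XX:+HeapDumpOnOutOfMemoryError</test.jvm.argLine>
</properties>
```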



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-common-trunk-Java8 #692

2015-11-13 Thread Apache Jenkins Server
See 

Changes:

[ozawa] Move HADOOP-11361, HADOOP-12348 and HADOOP-12482 from 2.8.0 to 2.7.3 in

--
[...truncated 5862 lines...]
Running org.apache.hadoop.fs.viewfs.TestFcMainOperationsLocalFs
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.096 sec - in 
org.apache.hadoop.fs.viewfs.TestFcMainOperationsLocalFs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.777 sec - in 
org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestFcLocalFsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.964 sec - in 
org.apache.hadoop.fs.TestFcLocalFsPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestAfsCheckPath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.27 sec - in 
org.apache.hadoop.fs.TestAfsCheckPath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestFileUtil
Tests run: 27, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.788 sec - in 
org.apache.hadoop.fs.TestFileUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestGlobPattern
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.197 sec - in 
org.apache.hadoop.fs.TestGlobPattern
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestDFVariations
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.258 sec - in 
org.apache.hadoop.fs.TestDFVariations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestBlockLocation
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.107 sec - in 
org.apache.hadoop.fs.TestBlockLocation
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestCopy
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.204 sec - in 
org.apache.hadoop.fs.shell.TestCopy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestCopyPreserveFlag
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.08 sec - in 
org.apache.hadoop.fs.shell.TestCopyPreserveFlag
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestPathExceptions
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.104 sec - in 
org.apache.hadoop.fs.shell.TestPathExceptions
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestAclCommands
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.984 sec - in 
org.apache.hadoop.fs.shell.TestAclCommands
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestLs
Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.444 sec - in 
org.apache.hadoop.fs.shell.TestLs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestTextCommand
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.251 sec - in 
org.apache.hadoop.fs.shell.TestTextCommand
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestMove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.016 sec - in 
org.apache.hadoop.fs.shell.TestMove
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestPathData
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.167 sec - in 
org.apache.hadoop.fs.shell.TestPathData
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestCommandFactory
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.228 sec - in 
org.apache.hadoop.fs.shell.TestCommandFactory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestCount

[jira] [Resolved] (HADOOP-12571) [JDK8] Remove XX:MaxPermSize setting from pom.xml

2015-11-13 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HADOOP-12571.

Resolution: Duplicate

Closing this.

> [JDK8] Remove XX:MaxPermSize setting from pom.xml
> -
>
> Key: HADOOP-12571
> URL: https://issues.apache.org/jira/browse/HADOOP-12571
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira AJISAKA
>Priority: Minor
>
> {code:title=hadoop-project/pom.xml}
> -Xmx2048m -XX:MaxPermSize=768m 
> -XX:+HeapDumpOnOutOfMemoryError
> {code}
> {{-XX:MaxPermSize}} is not supported in JDK8. It should be removed after 
> dropping support of JDK7.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)