[ANNOUNCE] Apache Hadoop 2.8.3 Release

2017-12-18 Thread 俊平堵
Hi all,

I am pleased to announce that the Apache Hadoop community has voted to
release Apache Hadoop 2.8.3!

Apache Hadoop 2.8.3 is the next GA release in the Apache Hadoop 2.8 line, and it
includes 79 fixes identified since the previous Hadoop 2.8.2 release:
 - For the major changes included in the Hadoop 2.8 line, please refer to the
Hadoop 2.8.3 main page [1].
 - For more details about the fixes in the 2.8.3 release, please read the
CHANGES log [2] and RELEASENOTES [3].

The release news is also posted on the Hadoop website; you can go to the
downloads section directly [4].

Thank you all for contributing to Apache Hadoop!


Cheers,


Junping

[1] http://hadoop.apache.org/docs/r2.8.3/index.html

[2]
http://hadoop.apache.org/docs/r2.8.3/hadoop-project-dist/hadoop-common/release/2.8.3/CHANGES.2.8.3.html

[3]
http://hadoop.apache.org/docs/r2.8.3/hadoop-project-dist/hadoop-common/release/2.8.3/RELEASENOTES.2.8.3.html

[4] http://hadoop.apache.org/releases.html#Download


Re: [ANNOUNCE] Apache Hadoop 3.0.0 GA is released

2017-12-18 Thread Andrew Wang
Thanks for spotting that. I just pushed the correct tag; I can't delete the bad
tag myself, so I'll ask ASF infra for help.
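
For anyone following along, the fix amounts to tagging the RC1 commit with the
conventional rel/ name and pushing it; a rough sketch (it assumes a clone with the
ASF repo as "origin" and signing set up for release tags):

    # Tag the RC1 commit with the conventional rel/ name and publish it.
    git tag -s rel/release-3.0.0 c25427ceca461ee979d30edd7a4b0f50718e6533 -m "Hadoop 3.0.0 release"
    git push origin rel/release-3.0.0

    # The stray tag would normally go away with
    #   git push origin :refs/tags/rel/release-
    # but that isn't possible for committers here, hence the infra request.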

On Mon, Dec 18, 2017 at 4:46 PM, Jonathan Kelly 
wrote:

> Congrats on the huge release!
>
> I just noticed, though, that the Github repo does not appear to have the
> correct tag for 3.0.0. I see a new tag called "rel/release-" that points to
> the same commit as "release-3.0.0-RC1" 
> (c25427ceca461ee979d30edd7a4b0f50718e6533).
> I assume that should have actually been called "rel/release-3.0.0" to match
> the pattern for prior releases.
>
> Thanks,
> Jonathan Kelly
>
> On Thu, Dec 14, 2017 at 10:45 AM Andrew Wang 
> wrote:
>
>> Hi all,
>>
>> I'm pleased to announce that Apache Hadoop 3.0.0 is generally available
>> (GA).
>>
>> 3.0.0 GA consists of 302 bug fixes, improvements, and other enhancements
>> since 3.0.0-beta1. This release marks a point of quality and stability for
>> the 3.0.0 release line, and users of earlier 3.0.0-alpha and -beta
>> releases
>> are encouraged to upgrade.
>>
>> Looking back, 3.0.0 GA is the culmination of over a year of work on the
>> 3.0.0 line, starting with 3.0.0-alpha1 which was released in September
>> 2016. Altogether, 3.0.0 incorporates 6,242 changes since 2.7.0.
>>
>> Users are encouraged to read the overview of major changes in 3.0.0. The GA
>> release notes
>> <http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/RELEASENOTES.3.0.0.html>
>> and changelog
>> <http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/CHANGES.3.0.0.html>
>> detail the changes since 3.0.0-beta1.
>>
>> The ASF press release provides additional color and highlights some of the
>> major features:
>>
>> https://globenewswire.com/news-release/2017/12/14/1261879/0/en/The-Apache-Software-Foundation-Announces-Apache-Hadoop-v3-0-0-General-Availability.html
>>
>> Let me end by thanking the many, many contributors who helped with this
>> release line. We've only had three major releases in Hadoop's 10 year
>> history, and this is our biggest major release ever. It's an incredible
>> accomplishment for our community, and I'm proud to have worked with all of
>> you.
>>
>> Best,
>> Andrew
>>
>


Re: [ANNOUNCE] Apache Hadoop 3.0.0 GA is released

2017-12-18 Thread Jonathan Kelly
Congrats on the huge release!

I just noticed, though, that the GitHub repo does not appear to have the
correct tag for 3.0.0. I see a new tag called "rel/release-" that points to
the same commit as "release-3.0.0-RC1"
(c25427ceca461ee979d30edd7a4b0f50718e6533). I assume that should have
actually been called "rel/release-3.0.0" to match the pattern for prior
releases.
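
A quick way to check this from a clone (a rough sketch using only standard git
commands):

    # Remotely: list the 3.0.0-related tags and the commits they point at.
    git ls-remote --tags https://github.com/apache/hadoop.git | grep release-3.0.0

    # Locally, in an up-to-date clone: show every tag that points at the RC1 commit.
    git tag --points-at c25427ceca461ee979d30edd7a4b0f50718e6533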

Thanks,
Jonathan Kelly

On Thu, Dec 14, 2017 at 10:45 AM Andrew Wang 
wrote:

> Hi all,
>
> I'm pleased to announce that Apache Hadoop 3.0.0 is generally available
> (GA).
>
> 3.0.0 GA consists of 302 bug fixes, improvements, and other enhancements
> since 3.0.0-beta1. This release marks a point of quality and stability for
> the 3.0.0 release line, and users of earlier 3.0.0-alpha and -beta releases
> are encouraged to upgrade.
>
> Looking back, 3.0.0 GA is the culmination of over a year of work on the
> 3.0.0 line, starting with 3.0.0-alpha1 which was released in September
> 2016. Altogether, 3.0.0 incorporates 6,242 changes since 2.7.0.
>
> Users are encouraged to read the overview of major changes
>  in 3.0.0. The GA release
> notes
> <
> http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/RELEASENOTES.3.0.0.html
> >
>  and changelog
> <
> http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/CHANGES.3.0.0.html
> >
> detail
> the changes since 3.0.0-beta1.
>
> The ASF press release provides additional color and highlights some of the
> major features:
>
>
> https://globenewswire.com/news-release/2017/12/14/1261879/0/en/The-Apache-Software-Foundation-Announces-Apache-Hadoop-v3-0-0-General-Availability.html
>
> Let me end by thanking the many, many contributors who helped with this
> release line. We've only had three major releases in Hadoop's 10 year
> history, and this is our biggest major release ever. It's an incredible
> accomplishment for our community, and I'm proud to have worked with all of
> you.
>
> Best,
> Andrew
>


Re: [ANNOUNCE] Apache Hadoop 3.0.0 GA is released

2017-12-18 Thread Allen Wittenauer

It’s significantly more concerning that 3.0.0-beta1 doesn’t show up here:

http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/index.html

It looks like the 3.0.0-beta1 release docs are missing from the source tag too. I
wonder what else is missing.
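
One way to confirm what the source tag actually contains (a rough sketch; the path
to the generated release docs is an assumption, adjust if the site sources live
elsewhere):

    git fetch origin 'refs/tags/rel/*:refs/tags/rel/*'
    git ls-tree --name-only rel/release-3.0.0 \
        hadoop-common-project/hadoop-common/src/site/markdown/release/ | grep 3.0.0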


> On Dec 18, 2017, at 11:15 AM, Andrew Wang  wrote:
> 
> Moving general@ to BCC,
> 
> The main page and release posts on hadoop.apache.org are pretty clear
> about this being a diff from beta1; am I missing something? Pasted below:
> 
> After four alpha releases and one beta release, 3.0.0 is generally
> available. 3.0.0 consists of 302 bug fixes, improvements, and other
> enhancements since 3.0.0-beta1. All together, 6242 issues were fixed as
> part of the 3.0.0 release series since 2.7.0.
> 
> Users are encouraged to read the overview of major changes in 3.0.0. The GA release
> notes
> <http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/RELEASENOTES.3.0.0.html>
> and changelog
> <http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/CHANGES.3.0.0.html>
> detail the changes since 3.0.0-beta1.
> 
> 
> 
> On Mon, Dec 18, 2017 at 10:32 AM, Arpit Agarwal 
> wrote:
> 
>> That makes sense for Beta users but most of our users will be upgrading
>> from a previous GA release and the changelog will mislead them. The webpage
>> does not mention this is a delta from the beta release.
>> 
>> 
>> 
>> 
>> 
>> *From: *Andrew Wang 
>> *Date: *Friday, December 15, 2017 at 10:36 AM
>> *To: *Arpit Agarwal 
>> *Cc: *general , "common-...@hadoop.apache.org"
>> , "yarn-...@hadoop.apache.org" <
>> yarn-...@hadoop.apache.org>, "mapreduce-...@hadoop.apache.org" <
>> mapreduce-...@hadoop.apache.org>, "hdfs-dev@hadoop.apache.org" <
>> hdfs-dev@hadoop.apache.org>
>> *Subject: *Re: [ANNOUNCE] Apache Hadoop 3.0.0 GA is released
>> 
>> 
>> 
>> Hi Arpit,
>> 
>> 
>> 
>> If you look at the release announcements, it's made clear that the
>> changelog for 3.0.0 is diffed based on beta1. This is important since users
>> need to know what's different from the previous 3.0.0-* releases if they're
>> upgrading.
>> 
>> 
>> 
>> I agree there's additional value to making combined release notes, but
>> it'd be something additive rather than replacing what's there.
>> 
>> 
>> 
>> Best,
>> 
>> Andrew
>> 
>> 
>> 
>> On Fri, Dec 15, 2017 at 8:27 AM, Arpit Agarwal 
>> wrote:
>> 
>> 
>> Hi Andrew,
>> 
>> Thank you for all the hard work on this release. I was out the last few
>> days and didn’t get a chance to evaluate RC1 earlier.
>> 
>> The changelog looks incorrect. E.g. This gives an impression that there
>> are just 5 incompatible changes in 3.0.0.
>> http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/CHANGES.3.0.0.html
>> 
>> I assume you only counted 3.0.0 changes in this log excluding
>> alphas/betas. However, users shouldn’t have to manually compile
>> incompatibilities by summing up a/b release notes. Can we fix the changelog
>> after the fact?
>> 
>> 
>> 
>> 
>> On 12/14/17, 10:45 AM, "Andrew Wang"  wrote:
>> 
>>Hi all,
>> 
>>I'm pleased to announce that Apache Hadoop 3.0.0 is generally available
>>(GA).
>> 
>>3.0.0 GA consists of 302 bug fixes, improvements, and other
>> enhancements
>>since 3.0.0-beta1. This release marks a point of quality and stability
>> for
>>the 3.0.0 release line, and users of earlier 3.0.0-alpha and -beta
>> releases
>>are encouraged to upgrade.
>> 
>>Looking back, 3.0.0 GA is the culmination of over a year of work on the
>>3.0.0 line, starting with 3.0.0-alpha1 which was released in September
>>2016. Altogether, 3.0.0 incorporates 6,242 changes since 2.7.0.
>> 
>> Users are encouraged to read the overview of major changes in 3.0.0. The GA
>> release notes
>> <http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/RELEASENOTES.3.0.0.html>
>> and changelog
>> <http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/CHANGES.3.0.0.html>
>> detail the changes since 3.0.0-beta1.
>> 
>>The ASF press release provides additional color and highlights some of
>> the
>>major features:
>> 
>> https://globenewswire.com/news-release/2017/12/14/1261879/0/en/The-Apache-Software-Foundation-Announces-Apache-Hadoop-v3-0-0-General-Availability.html
>> 
>>Let me end by thanking the many, many contributors who helped with this
>>release line. We've only had three major releases in Hadoop's 10 year
>>history, and this is our biggest major release ever. It's an incredible
>>accomplishment for our 

Re: [ANNOUNCE] Apache Hadoop 3.0.0 GA is released

2017-12-18 Thread Andrew Wang
Moving general@ to BCC,

The main page and release posts on hadoop.apache.org are pretty clear
about this being a diff from beta1; am I missing something? Pasted below:

After four alpha releases and one beta release, 3.0.0 is generally
available. 3.0.0 consists of 302 bug fixes, improvements, and other
enhancements since 3.0.0-beta1. All together, 6242 issues were fixed as
part of the 3.0.0 release series since 2.7.0.

Users are encouraged to read the overview of major changes in 3.0.0. The GA release
notes
<http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/RELEASENOTES.3.0.0.html>
and changelog
<http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/CHANGES.3.0.0.html>
detail the changes since 3.0.0-beta1.



On Mon, Dec 18, 2017 at 10:32 AM, Arpit Agarwal 
wrote:

> That makes sense for Beta users but most of our users will be upgrading
> from a previous GA release and the changelog will mislead them. The webpage
> does not mention this is a delta from the beta release.
>
>
>
>
>
> *From: *Andrew Wang 
> *Date: *Friday, December 15, 2017 at 10:36 AM
> *To: *Arpit Agarwal 
> *Cc: *general , "common-...@hadoop.apache.org"
> , "yarn-...@hadoop.apache.org" <
> yarn-...@hadoop.apache.org>, "mapreduce-...@hadoop.apache.org" <
> mapreduce-...@hadoop.apache.org>, "hdfs-dev@hadoop.apache.org" <
> hdfs-dev@hadoop.apache.org>
> *Subject: *Re: [ANNOUNCE] Apache Hadoop 3.0.0 GA is released
>
>
>
> Hi Arpit,
>
>
>
> If you look at the release announcements, it's made clear that the
> changelog for 3.0.0 is diffed based on beta1. This is important since users
> need to know what's different from the previous 3.0.0-* releases if they're
> upgrading.
>
>
>
> I agree there's additional value to making combined release notes, but
> it'd be something additive rather than replacing what's there.
>
>
>
> Best,
>
> Andrew
>
>
>
> On Fri, Dec 15, 2017 at 8:27 AM, Arpit Agarwal 
> wrote:
>
>
> Hi Andrew,
>
> Thank you for all the hard work on this release. I was out the last few
> days and didn’t get a chance to evaluate RC1 earlier.
>
> The changelog looks incorrect. E.g. This gives an impression that there
> are just 5 incompatible changes in 3.0.0.
> http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/CHANGES.3.0.0.html
>
> I assume you only counted 3.0.0 changes in this log excluding
> alphas/betas. However, users shouldn’t have to manually compile
> incompatibilities by summing up a/b release notes. Can we fix the changelog
> after the fact?
>
>
>
>
> On 12/14/17, 10:45 AM, "Andrew Wang"  wrote:
>
> Hi all,
>
> I'm pleased to announce that Apache Hadoop 3.0.0 is generally available
> (GA).
>
> 3.0.0 GA consists of 302 bug fixes, improvements, and other
> enhancements
> since 3.0.0-beta1. This release marks a point of quality and stability
> for
> the 3.0.0 release line, and users of earlier 3.0.0-alpha and -beta
> releases
> are encouraged to upgrade.
>
> Looking back, 3.0.0 GA is the culmination of over a year of work on the
> 3.0.0 line, starting with 3.0.0-alpha1 which was released in September
> 2016. Altogether, 3.0.0 incorporates 6,242 changes since 2.7.0.
>
> Users are encouraged to read the overview of major changes in 3.0.0. The GA
> release notes
> <http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/RELEASENOTES.3.0.0.html>
> and changelog
> <http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/CHANGES.3.0.0.html>
> detail the changes since 3.0.0-beta1.
>
> The ASF press release provides additional color and highlights some of
> the
> major features:
>
> https://globenewswire.com/news-release/2017/12/14/1261879/0/en/The-Apache-Software-Foundation-Announces-Apache-Hadoop-v3-0-0-General-Availability.html
>
> Let me end by thanking the many, many contributors who helped with this
> release line. We've only had three major releases in Hadoop's 10 year
> history, and this is our biggest major release ever. It's an incredible
> accomplishment for our community, and I'm proud to have worked with
> all of
> you.
>
> Best,
> Andrew
>
>
>
>
>
>
>
>


[jira] [Resolved] (HDFS-12936) java.lang.OutOfMemoryError: unable to create new native thread

2017-12-18 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-12936.
-
Resolution: Not A Bug

> java.lang.OutOfMemoryError: unable to create new native thread
> --
>
> Key: HDFS-12936
> URL: https://issues.apache.org/jira/browse/HDFS-12936
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.0
> Environment: CDH5.12
> hadoop2.6
>Reporter: Jepson
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> I configured the max user processes to 65535 for every user, and the datanode
> memory is 8G.
> When a lot of data was being written, the datanode was shut down.
> But I can see that the memory usage was only < 1000M.
> Please see https://pan.baidu.com/s/1o7BE0cy
> *DataNode shutdown error log:*  
> {code:java}
> 2017-12-17 23:58:14,422 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> PacketResponder: 
> BP-1437036909-192.168.17.36-1509097205664:blk_1074725940_987917, 
> type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2017-12-17 23:58:31,425 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. 
> Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
>   at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:01,426 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. 
> Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
>   at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:05,520 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. 
> Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
>   at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:31,429 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Receiving BP-1437036909-192.168.17.36-1509097205664:blk_1074725951_987928 
> src: /192.168.17.54:40478 dest: /192.168.17.48:50010
> {code}
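
For readers hitting the same thing: since this was closed as Not A Bug, the error is
about the DataNode process running out of native threads (the per-user process/thread
limit), not out of Java heap. A rough sketch of the usual checks, assuming the DataNode
runs as a dedicated service user such as "hdfs" (names and paths are illustrative):

{code}
# Thread count of the running DataNode (assumes jps is on the PATH).
DN_PID=$(jps | awk '/DataNode/ {print $1}')
grep -i threads /proc/${DN_PID}/status

# Limits of the process that is actually running; a daemon started by init or a
# management agent may not pick up ulimit settings made in an interactive shell.
grep -i 'max processes' /proc/${DN_PID}/limits

# Kernel-wide ceilings that can also cap thread creation.
sysctl kernel.threads-max kernel.pid_max

# Per-user limit for the service account, e.g. in /etc/security/limits.conf:
#   hdfs  soft  nproc  65536
#   hdfs  hard  nproc  65536
{code}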



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [ANNOUNCE] Apache Hadoop 3.0.0 GA is released

2017-12-18 Thread Arpit Agarwal
That makes sense for beta users, but most of our users will be upgrading from a
previous GA release, and the changelog will mislead them. The webpage does not
mention that this is a delta from the beta release.
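
In the meantime, a user who wants the GA-to-GA picture can stitch the per-release
changelogs together themselves; a rough sketch (the URL pattern for the alpha/beta
docs trees is an assumption based on the GA changelog link quoted below):

    base=http://hadoop.apache.org/docs
    for v in 3.0.0-alpha1 3.0.0-alpha2 3.0.0-alpha3 3.0.0-alpha4 3.0.0-beta1 3.0.0; do
      curl -fsSL "$base/r$v/hadoop-project-dist/hadoop-common/release/$v/CHANGES.$v.html" \
        >> CHANGES.3.0.0-combined.html || echo "no changelog found for $v"
    done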


From: Andrew Wang 
Date: Friday, December 15, 2017 at 10:36 AM
To: Arpit Agarwal 
Cc: general , "common-...@hadoop.apache.org" 
, "yarn-...@hadoop.apache.org" 
, "mapreduce-...@hadoop.apache.org" 
, "hdfs-dev@hadoop.apache.org" 

Subject: Re: [ANNOUNCE] Apache Hadoop 3.0.0 GA is released

Hi Arpit,

If you look at the release announcements, it's made clear that the changelog 
for 3.0.0 is diffed based on beta1. This is important since users need to know 
what's different from the previous 3.0.0-* releases if they're upgrading.

I agree there's additional value to making combined release notes, but it'd be 
something additive rather than replacing what's there.

Best,
Andrew

On Fri, Dec 15, 2017 at 8:27 AM, Arpit Agarwal 
> wrote:

Hi Andrew,

Thank you for all the hard work on this release. I was out the last few days 
and didn’t get a chance to evaluate RC1 earlier.

The changelog looks incorrect. E.g. This gives an impression that there are 
just 5 incompatible changes in 3.0.0.
http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/CHANGES.3.0.0.html

I assume you only counted 3.0.0 changes in this log excluding alphas/betas. 
However, users shouldn’t have to manually compile incompatibilities by summing 
up a/b release notes. Can we fix the changelog after the fact?




On 12/14/17, 10:45 AM, "Andrew Wang" 
> wrote:

Hi all,

I'm pleased to announce that Apache Hadoop 3.0.0 is generally available
(GA).

3.0.0 GA consists of 302 bug fixes, improvements, and other enhancements
since 3.0.0-beta1. This release marks a point of quality and stability for
the 3.0.0 release line, and users of earlier 3.0.0-alpha and -beta releases
are encouraged to upgrade.

Looking back, 3.0.0 GA is the culmination of over a year of work on the
3.0.0 line, starting with 3.0.0-alpha1 which was released in September
2016. Altogether, 3.0.0 incorporates 6,242 changes since 2.7.0.

Users are encouraged to read the overview of major changes in 3.0.0. The GA release
notes
<http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/RELEASENOTES.3.0.0.html>
and changelog
<http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/3.0.0/CHANGES.3.0.0.html>
detail the changes since 3.0.0-beta1.

The ASF press release provides additional color and highlights some of the
major features:


https://globenewswire.com/news-release/2017/12/14/1261879/0/en/The-Apache-Software-Foundation-Announces-Apache-Hadoop-v3-0-0-General-Availability.html

Let me end by thanking the many, many contributors who helped with this
release line. We've only had three major releases in Hadoop's 10 year
history, and this is our biggest major release ever. It's an incredible
accomplishment for our community, and I'm proud to have worked with all of
you.

Best,
Andrew









Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-12-18 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/626/

[Dec 18, 2017 2:07:16 AM] (wwei) YARN-7617. Add a flag in distributed shell to 
automatically PROMOTE
[Dec 18, 2017 1:24:51 PM] (aajisaka) YARN-7664. Several javadoc errors. 
Contributed by Sean Mackrory.

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

[jira] [Created] (HDFS-12938) TestErasureCodigCLI test failing consistently.

2017-12-18 Thread Rushabh S Shah (JIRA)
Rushabh S Shah created HDFS-12938:
-

 Summary: TestErasureCodigCLI test failing consistently.
 Key: HDFS-12938
 URL: https://issues.apache.org/jira/browse/HDFS-12938
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding, hdfs
Affects Versions: 3.1.0
Reporter: Rushabh S Shah


{{TestErasureCodingCLI#testAll}} is failing consistently.
It failed in this precommit: 
https://builds.apache.org/job/PreCommit-HDFS-Build/22435/testReport/org.apache.hadoop.cli/TestErasureCodingCLI/testAll/
I ran it locally on my laptop, and it failed there too.
Below is the detailed report from 
{{org.apache.hadoop.cli.TestErasureCodingCLI-output.txt}}.
{noformat}
2017-12-18 09:25:44,821 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(156)) - 
---
2017-12-18 09:25:44,821 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(157)) - Test ID: [15]
2017-12-18 09:25:44,821 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(158)) -Test Description: 
[setPolicy : set policy on non-empty directory]
2017-12-18 09:25:44,821 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(159)) - 
2017-12-18 09:25:44,821 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(163)) -   Test Commands: [-fs 
hdfs://localhost:52345 -mkdir /ecdir]
2017-12-18 09:25:44,821 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(163)) -   Test Commands: [-fs 
hdfs://localhost:52345 -touchz /ecdir/file1]
2017-12-18 09:25:44,821 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(163)) -   Test Commands: [-fs 
hdfs://localhost:52345 -setPolicy -policy RS-6-3-1024k -path /ecdir]
2017-12-18 09:25:44,821 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(167)) - 
2017-12-18 09:25:44,821 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(170)) -Cleanup Commands: [-fs 
hdfs://localhost:52345 -rm -R /ecdir]
2017-12-18 09:25:44,821 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(174)) - 
2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(178)) -  Comparator: 
[SubstringComparator]
2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(180)) -  Comparision result:   [fail]
2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(182)) - Expected output:   
[Warning: setting erasure coding policy on an non-empty directory will not 
automatically convert existing data to RS-6-3-1024]
2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(184)) -   Actual output:   [Set 
erasure coding policy RS-6-3-1024k on /ecdir
Warning: setting erasure coding policy on a non-empty directory will not 
automatically convert existing files to RS-6-3-1024k
]
2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(187)) - 
2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(156)) - 
---
2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(157)) - Test ID: [17]
2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(158)) -Test Description: 
[unsetPolicy : unset policy on non-empty directory]
2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(159)) - 
2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(163)) -   Test Commands: [-fs 
hdfs://localhost:52345 -mkdir /ecdir]
2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(163)) -   Test Commands: [-fs 
hdfs://localhost:52345 -setPolicy -policy RS-6-3-1024k -path /ecdir]
2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(163)) -   Test Commands: [-fs 
hdfs://localhost:52345 -touchz /ecdir/file1]
2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(163)) -   Test Commands: [-fs 
hdfs://localhost:52345 -unsetPolicy -path /ecdir]
2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(167)) - 
2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(170)) -Cleanup Commands: [-fs 
hdfs://localhost:52345 -rm -R /ecdir]
2017-12-18 09:25:44,822 [Thread-0] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(174)) - 
2017-12-18 
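
To reproduce just this failure locally, something like the following should work (a
rough sketch; it assumes a trunk checkout with the usual Maven setup):

{code}
cd hadoop-hdfs-project/hadoop-hdfs
mvn test -Dtest=TestErasureCodingCLI
{code}

The expected-vs-actual comparison above suggests the expected warning text in the CLI
test definitions is stale relative to the current command output.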

[jira] [Created] (HDFS-12937) RBF: Add more unit test for router admin commands

2017-12-18 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-12937:


 Summary: RBF: Add more unit test for router admin commands
 Key: HDFS-12937
 URL: https://issues.apache.org/jira/browse/HDFS-12937
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 3.0.0
Reporter: Yiqun Lin
Assignee: Yiqun Lin


Adding more unit tests to ensure that the router admin commands work well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2017-12-18 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/72/

No changes




-1 overall


The following subsystems voted -1:
asflicense unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Unreaped Processes :

   hadoop-common:1 
   hadoop-hdfs:20 
   bkjournal:5 
   hadoop-yarn-server-timelineservice:1 
   hadoop-yarn-client:4 
   hadoop-yarn-applications-distributedshell:1 
   hadoop-mapreduce-client-jobclient:12 
   hadoop-distcp:5 
   hadoop-archives:1 
   hadoop-extras:1 

Failed junit tests :

   hadoop.fs.sftp.TestSFTPFileSystem 
   
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
 
   
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy 
   
hadoop.yarn.applications.distributedshell.TestDistributedShellWithNodeLabels 
   hadoop.mapreduce.lib.input.TestMultipleInputs 
   hadoop.mapreduce.lib.output.TestJobOutputCommitter 
   hadoop.mapreduce.lib.partition.TestMRKeyFieldBasedComparator 
   hadoop.mapreduce.v2.TestMRAMWithNonNormalizedCapabilities 
   hadoop.mapreduce.lib.input.TestMRSequenceFileInputFilter 
   hadoop.mapreduce.lib.input.TestNLineInputFormat 
   hadoop.mapreduce.v2.TestNonExistentJob 
   hadoop.mapreduce.lib.input.TestMRSequenceFileAsBinaryInputFormat 
   hadoop.mapreduce.v2.TestMRAppWithCombiner 
   hadoop.mapreduce.TestLocalRunner 
   hadoop.mapreduce.v2.TestUberAM 
   hadoop.mapreduce.v2.TestMRJobsWithProfiler 
   hadoop.mapreduce.lib.output.TestMRMultipleOutputs 
   hadoop.mapreduce.TestMapperReducerCleanup 
   hadoop.mapreduce.v2.TestRMNMInfo 
   hadoop.mapreduce.v2.TestSpeculativeExecutionWithMRApp 
   hadoop.mapreduce.v2.TestSpeculativeExecution 
   hadoop.mapreduce.v2.TestMROldApiJobs 
   hadoop.mapreduce.TestValueIterReset 
   hadoop.mapreduce.lib.input.TestLineRecordReaderJobs 
   hadoop.mapreduce.v2.TestMRJobsWithHistoryService 
   hadoop.mapreduce.lib.chain.TestChainErrors 
   hadoop.mapreduce.lib.input.TestMRCJCFileInputFormat 
   hadoop.mapreduce.lib.fieldsel.TestMRFieldSelection 
   hadoop.mapreduce.lib.output.TestMRSequenceFileAsBinaryOutputFormat 
   hadoop.mapreduce.lib.aggregate.TestMapReduceAggregates 
   hadoop.mapreduce.TestLargeSort 
   hadoop.mapreduce.lib.input.TestMRSequenceFileAsTextInputFormat 
   hadoop.mapreduce.lib.chain.TestMapReduceChain 
   hadoop.mapreduce.lib.db.TestDataDrivenDBInputFormat 
   hadoop.mapreduce.lib.output.TestMRCJCFileOutputCommitter 
   hadoop.mapreduce.lib.chain.TestSingleElementChain 
   hadoop.tools.TestDistCpSystem 
   hadoop.tools.TestIntegration 
   hadoop.tools.TestDistCpViewFs 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.resourceestimator.service.TestResourceEstimatorService 

Timed out junit tests :

   org.apache.hadoop.log.TestLogLevel 
   org.apache.hadoop.hdfs.TestLeaseRecovery2 
   org.apache.hadoop.hdfs.TestDatanodeRegistration 
   org.apache.hadoop.hdfs.TestRead 
   org.apache.hadoop.hdfs.web.TestWebHdfsTokens 
   org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream 
   org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade 
   org.apache.hadoop.hdfs.TestFileAppendRestart 
   org.apache.hadoop.hdfs.security.TestDelegationToken 
   org.apache.hadoop.hdfs.web.TestWebHdfsWithRestCsrfPreventionFilter 
   org.apache.hadoop.hdfs.TestDFSOutputStream 
   org.apache.hadoop.hdfs.TestDatanodeReport 
   org.apache.hadoop.hdfs.web.TestWebHDFS 
   org.apache.hadoop.hdfs.web.TestWebHDFSXAttr 
   org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes 
   org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs 
   org.apache.hadoop.hdfs.TestDistributedFileSystem 
   org.apache.hadoop.hdfs.web.TestWebHDFSForHA 
   org.apache.hadoop.hdfs.TestReplaceDatanodeFailureReplication 
   org.apache.hadoop.hdfs.TestDFSShell 
   org.apache.hadoop.hdfs.web.TestWebHDFSAcl 
   org.apache.hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM 
   org.apache.hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   org.apache.hadoop.contrib.bkjournal.TestBookKeeperAsHASharedDir 
   org.apache.hadoop.contrib.bkjournal.TestBookKeeperSpeculativeRead 
   
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServices
 
   org.apache.hadoop.yarn.client.TestRMFailover 
   org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA 
   

[jira] [Resolved] (HDFS-12936) java.lang.OutOfMemoryError: unable to create new native thread

2017-12-18 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang resolved HDFS-12936.

Resolution: Not A Bug

> java.lang.OutOfMemoryError: unable to create new native thread
> --
>
> Key: HDFS-12936
> URL: https://issues.apache.org/jira/browse/HDFS-12936
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.0
> Environment: CDH5.12
> hadoop2.6
>Reporter: Jepson
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> I configured the max user processes to 65535 for every user, and the datanode
> memory is 8G.
> When a lot of data was being written, the datanode was shut down.
> But I can see that the memory usage was only < 1000M.
> Please see https://pan.baidu.com/s/1o7BE0cy
> *DataNode shutdown error log:*  
> {code:java}
> 2017-12-17 23:58:14,422 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> PacketResponder: 
> BP-1437036909-192.168.17.36-1509097205664:blk_1074725940_987917, 
> type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2017-12-17 23:58:31,425 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. 
> Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
>   at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:01,426 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. 
> Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
>   at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:05,520 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. 
> Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
>   at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:31,429 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Receiving BP-1437036909-192.168.17.36-1509097205664:blk_1074725951_987928 
> src: /192.168.17.54:40478 dest: /192.168.17.48:50010
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-12936) java.lang.OutOfMemoryError: unable to create new native thread

2017-12-18 Thread Jepson (JIRA)
Jepson created HDFS-12936:
-

 Summary: java.lang.OutOfMemoryError: unable to create new native 
thread
 Key: HDFS-12936
 URL: https://issues.apache.org/jira/browse/HDFS-12936
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
 Environment: CDH5.12
hadoop2.6
Reporter: Jepson


*DataNode Error log:* 
{code:java}
2017-12-17 23:58:14,422 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
PacketResponder: 
BP-1437036909-192.168.17.36-1509097205664:blk_1074725940_987917, 
type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2017-12-17 23:58:31,425 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: 
DataNode is out of memory. Will retry in 30 seconds.
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
at java.lang.Thread.run(Thread.java:745)
2017-12-17 23:59:01,426 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: 
DataNode is out of memory. Will retry in 30 seconds.
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
at java.lang.Thread.run(Thread.java:745)
2017-12-17 23:59:05,520 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: 
DataNode is out of memory. Will retry in 30 seconds.
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
at java.lang.Thread.run(Thread.java:745)
2017-12-17 23:59:31,429 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Receiving BP-1437036909-192.168.17.36-1509097205664:blk_1074725951_987928 src: 
/192.168.17.54:40478 dest: /192.168.17.48:50010

{code}




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org