RE: Different JIRA permissions for HADOOP and HDFS

2016-06-20 Thread Zheng, Kai
Yeah, this would be great, so some guys like me won't need to trouble you 
asking the question again and again :). Thanks a lot.

Regards,
Kai

-Original Message-
From: Akira AJISAKA [mailto:ajisa...@oss.nttdata.co.jp] 
Sent: Monday, June 20, 2016 3:17 PM
To: Zheng, Kai ; common-dev@hadoop.apache.org
Subject: Re: Different JIRA permissions for HADOOP and HDFS

There is no doc.

1. Login to ASF JIRA
2. Go to the project page (e.g. https://issues.apache.org/jira/browse/HADOOP )
3. Hit "Administration" tab
4. Hit "Roles" tab in left side
5. Add administrators/committers/contributors

I'll document this in https://wiki.apache.org/hadoop/HowToCommit

Regards,
Akira

On 6/20/16 16:08, Zheng, Kai wrote:
> Thanks Akira for the nice info. So where is the link to do it or any how to 
> doc? Sorry I browsed the existing wiki doc but didn't find how to add 
> contributors.
>
> Regards,
> Kai
>
> -Original Message-
> From: Akira AJISAKA [mailto:ajisa...@oss.nttdata.co.jp]
> Sent: Monday, June 20, 2016 12:22 PM
> To: Zheng, Kai ; common-dev@hadoop.apache.org
> Subject: Re: Different JIRA permissions for HADOOP and HDFS
>
> Yes, the role allows committers to add/remove all the roles.
>
> Now about 400 accounts have contributors roles in Hadoop common, and about 
> 1000 contributors in history.
>
> Regards,
> Akira
>
> On 6/19/16 19:43, Zheng, Kai wrote:
>> Thanks Akira for the work.
>>
>> What the committer role can do in addition to the committing codes? Can the 
>> role allow to add/remove a contributor? As I said in my last email, I want 
>> to have some contributor(s) back and may add more in some time later.
>>
>> Not sure if we need to clean up long time no active contributors. It may be 
>> nice to know how many contributors the project has in its history. If the 
>> list is too long, maybe we can put them in another list, like 
>> OLD_CONTRIBUTORS.
>>
>> Regards,
>> Kai
>>
>> -Original Message-
>> From: Akira AJISAKA [mailto:ajisa...@oss.nttdata.co.jp]
>> Sent: Saturday, June 18, 2016 12:56 PM
>> To: Zheng, Kai ; common-dev@hadoop.apache.org
>> Subject: Re: Different JIRA permissions for HADOOP and HDFS
>>
>> I'm doing the following steps to reduce the number of contributors:
>>
>> 1. Find committers who have only contributor role
>> 2. Add them into committer role
>> 3. Remove them from contributor role
>>
>> However, this is a temporary solution.
>> Probably we need to do one of the followings in the near future.
>>
>> * Create contributor2 role to increase the limit
>> * Remove contributors who have not been active for a long time
>>
>> Regards,
>> Akira
>>
>> On 6/18/16 10:24, Zheng, Kai wrote:
>>> Hi Akira,
>>>
>>> Some contributors (not committer) I know were found lost and we can't 
>>> assign tasks to. Any way I can add them or have to trouble others for that 
>>> each time when there is a new one? Thanks!
>>>
>>> Regards,
>>> Kai
>>>
>>> -Original Message-
>>> From: Akira AJISAKA [mailto:ajisa...@oss.nttdata.co.jp]
>>> Sent: Monday, June 06, 2016 12:47 AM
>>> To: common-dev@hadoop.apache.org
>>> Subject: Re: Different JIRA permissions for HADOOP and HDFS
>>>
>>> Now I can't add any more contributors in HADOOP Common, so I'll remove the 
>>> contributors who have committers role to make the group smaller.
>>> Please tell me if you have lost your roles by mistake.
>>>
>>> Regards,
>>> Akira
>>>
>>> On 5/18/16 13:48, Akira AJISAKA wrote:
 In HADOOP/HDFS/MAPREDUCE/YARN, I removed the administrators from 
 contributor group. After that, added Varun into contributor roles.
 # Ray is already added into contributor roles.

 Hi contributors/committers, please tell me if you have lost your 
 roles by mistake.

> just remove a big chunk of the committers from all the lists
 In Apache Hadoop Project bylaws, "A Committer is considered 
 emeritus by their own declaration or by not contributing in any 
 form to the project for over six months." Therefore we can remove 
 them from the list, but I'm thinking this is the last option.

 Regards,
 Akira

 On 5/18/16 09:07, Allen Wittenauer wrote:
>
> We should probably just remove a big chunk of the committers from 
> all the lists.  Most of them have disappeared from Hadoop anyway.
> (The 55% growth in JIRA issues in patch available state in the 
> past year alone is a pretty good testament to that fact.)
>
>> On May 17, 2016, at 4:40 PM, Akira Ajisaka  wrote:
>>
>>> Is there some way for us to add a "Contributors2" group with the 
>>> same permissions as a workaround?  Or we could try to clean out 
>>> contributors who are no longer active, but that might be hard to figure 
>>> out.
>>
>> Contributors2 seems fine. AFAIK, committers sometimes cleaned out 
>> contributors who are no longer active.
>> 

[jira] [Created] (HADOOP-13302) Remove unused variable in TestRMWebServicesForCSWithPartitions#setupQueueConfiguration

2016-06-20 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-13302:
--

 Summary: Remove unused variable in 
TestRMWebServicesForCSWithPartitions#setupQueueConfiguration
 Key: HADOOP-13302
 URL: https://issues.apache.org/jira/browse/HADOOP-13302
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Akira AJISAKA
Priority: Minor


{code}
  private static void setupQueueConfiguration(
  CapacitySchedulerConfiguration config, ResourceManager resourceManager) {
{code}
{{resourceManager}} is not used, so it can be removed.
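
A minimal sketch of the cleaned-up signature, assuming the fix is simply dropping the 
unused parameter (callers would be updated accordingly; the method body is elided):
{code}
  // Sketch only: same method with the unused ResourceManager parameter removed.
  private static void setupQueueConfiguration(
      CapacitySchedulerConfiguration config) {
    // ... existing queue configuration of config, unchanged ...
  }
{code}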



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] Merge Jersey upgrade(HADOOP-9613) to trunk

2016-06-20 Thread Tsuyoshi Ozawa
Thanks all for your replies!

Based on the discussion here, I merged HADOOP-9613 to
trunk after the reviews. Thank you for the great help. The problems
reported on HADOOP-9613 by Wangda, Li, and Sunil are filed as
YARN-5275. I'll also take a look at it.

I believe this change is a great step toward compiling Hadoop source code
with JDK8. Again, thanks a lot!

Best,
- Tsuyoshi

On Tue, May 10, 2016 at 12:37 PM, Ravi Prakash  wrote:
> +1. Awesome effort Tsuyoshi. This has been blocking compression on the wire
> too.
>
> On Tue, May 10, 2016 at 11:24 AM, Colin McCabe  wrote:
>
>> +1 for updating this in trunk.  Thanks, Tsuyoshi Ozawa.
>>
>> cheers,
>> Colin
>>
>> On Mon, May 9, 2016, at 12:12, Tsuyoshi Ozawa wrote:
>> > Hi developers,
>> >
>> > We’ve worked on upgrading Jersey (HADOOP-9613) for years. It's an
>> > essential change to support compilation with JDK8. It’s almost there.
>> >
>> > One concern about merging this to trunk is incompatibility. Since the
>> > release of Jersey 1.13, a root element whose content is an empty
>> > collection is rendered as an empty object ({}) instead of null.  Because of this
>> > problem, I’ve marked HADOOP-9613 as an incompatible change. Is it
>> > acceptable for us? If it’s an acceptable change for trunk, I’d like to
>> > merge it into trunk.
>> >
>> > Thanks,
>> > - Tsuyoshi
>> >
>> > -
>> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>> >
>>
>> -
>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>>
>>

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13301) Millisecond timestamp for FsShell console log

2016-06-20 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-13301:
---

 Summary: Millisecond timestamp for FsShell console log
 Key: HADOOP-13301
 URL: https://issues.apache.org/jira/browse/HADOOP-13301
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Trivial
 Fix For: 3.0.0-alpha1


The log message timestamps on the FsShell console show only seconds. 
{noformat}
$ export HADOOP_ROOT_LOGGER=TRACE,console
$ hdfs dfs -rm -skipTrash /tmp/2G*
16/06/20 16:00:03 DEBUG util.Shell: setsid exited with exit code 0
{noformat}

Would like to see milliseconds for quick performance tuning.
{noformat}
2016-06-20 16:01:42,588 DEBUG util.Shell: setsid exited with exit code 0
{noformat}
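
For illustration, a hedged sketch of how the millisecond output could be obtained with 
the log4j 1.x API (the class name below is made up for the demo; the real fix would more 
likely go through the console appender pattern in log4j.properties): the ISO8601 date 
format already includes milliseconds.
{code}
import org.apache.log4j.ConsoleAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

// Demo only: prints a DEBUG line with a millisecond timestamp, e.g.
// "2016-06-20 16:01:42,588 DEBUG util.Shell: setsid exited with exit code 0".
public class MillisecondConsoleLogDemo {
  public static void main(String[] args) {
    // %d{ISO8601} renders the time as yyyy-MM-dd HH:mm:ss,SSS.
    PatternLayout layout = new PatternLayout("%d{ISO8601} %-5p %c{2}: %m%n");
    Logger.getRootLogger().addAppender(new ConsoleAppender(layout));
    Logger.getLogger("org.apache.hadoop.util.Shell")
        .debug("setsid exited with exit code 0");
  }
}
{code}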



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13300) maven-jar-plugin executions break build with newer plugin

2016-06-20 Thread Christopher Tubbs (JIRA)
Christopher Tubbs created HADOOP-13300:
--

 Summary: maven-jar-plugin executions break build with newer plugin
 Key: HADOOP-13300
 URL: https://issues.apache.org/jira/browse/HADOOP-13300
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Christopher Tubbs


In several places throughout the Hadoop build (going back at least as far as 
2.4.1; I didn't check earlier), extra executions of maven-jar-plugin have been 
used to create jars at different phases of the build lifecycle.

These have typically not specified an execution id, but should have specified 
"default-jar" to override the default execution of maven-jar-plugin. They have 
worked in the past because maven-jar-plugin didn't check to verify if an 
artifact was built/attached multiple times (without using a classifier), but 
will not work when a newer version of maven-jar-plugin is used (>3.0), which is 
more strict about checking.

This is a problem for any downstream packagers who are using newer versions 
of build plugins (due to dependency convergence), and it will become a problem when 
Hadoop moves to a newer version of the jar plugin (with ASF Parent POM 18, for 
example).

[These are the ones I've 
found|https://lists.apache.org/thread.html/2c9d9ea5448a3ed22743916d20e40a9e589bfa383c8ea65f35cb3f0d@%3Cuser.hadoop.apache.org%3E].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13299) JMXJsonServlet is vulnerable to TRACE

2016-06-20 Thread Haibo Chen (JIRA)
Haibo Chen created HADOOP-13299:
---

 Summary: JMXJsonServlet is vulnerable to TRACE 
 Key: HADOOP-13299
 URL: https://issues.apache.org/jira/browse/HADOOP-13299
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haibo Chen
Assignee: Haibo Chen
Priority: Minor


A Nessus scan shows that JMXJsonServlet is vulnerable to TRACE/TRACK requests.  
We could disable these methods to avoid the vulnerability.
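
For illustration only, a generic servlet-level sketch (not necessarily how the Hadoop 
HTTP server would implement the fix, and it covers TRACE but not the IIS-specific TRACK 
method): overriding doTrace makes the servlet answer 405 instead of echoing the request 
back, which is what such scanners flag.
{code}
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch: a servlet that refuses TRACE requests.
public class NoTraceServlet extends HttpServlet {
  @Override
  protected void doTrace(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
    // Answer 405 instead of the default behaviour of echoing the request back.
    resp.sendError(HttpServletResponse.SC_METHOD_NOT_ALLOWED, "TRACE is disabled");
  }
}
{code}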



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: hadoop-build-tools/src/main/resources/META-INF/

2016-06-20 Thread Xiao Chen
FYI - created https://issues.apache.org/jira/browse/HADOOP-13298.

-Xiao

On Mon, Jun 20, 2016 at 12:03 PM, Sean Busbey  wrote:

> file a jira please and I'll take a look.
>
> On Fri, Jun 17, 2016 at 4:10 PM, Xiao Chen  wrote:
> > Thanks Steve for reporting the issue and Sean for the suggestion. This is
> > indeed from HADOOP-12893 (blush).
> >
> > I'm no maven expert so appreciate any recommendations.
> >
> > The reason for the current way is that for the L to be patched into a jar,
> > it seems that maven remote resource plugin (which named itself to be the
> > typical Apache licensing way) requires the files to be under
> > src/main/resources. This was mentioned in their example, and I wasn't able
> > to trick it to pack things not in there. I wish there were more examples to
> > help in our case.
> >
> > So, in HADOOP-12893 I put a step to copy the L into
> > hadoop-build-tools/src/main/resources dir, to allow it get packed into the
> > jar. I thought about symlink but don't think it's a good way for Windows
> > builds.
> >
> > It's not committed because we don't want an extra copy of L, we could list
> > it in .gitignore.
> >
> >
> > P.S. Tried a bit with Sean's suggestion of making it under
> > target/generated-sources, but couldn't get the plugin to include it. I'm
> > happy to try out more elegant solutions if you have any suggestions.
> >
> > Thanks!
> >
> >
> > -Xiao
> >
> > On Fri, Jun 17, 2016 at 7:34 AM, Sean Busbey  wrote:
> >>
> >> If it's generated and we're following The Maven Way, it should be in
> >> target. probably in target/generated-sources
> >>
> >> On Fri, Jun 17, 2016 at 9:33 AM, Steve Loughran  wrote:
> >> >
> >> > I see (presumably from the licensing work), that I'm now getting
> >> > hadoop-build-tools/src/main/resources/META-INF/ as an untracked directory.
> >> >
> >> > If this is generated, should it be in the source tree? And if so, should
> >> > it be committed, or listed in .gitignore?
> >> >
> >> > -
> >> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> >> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >> >
> >>
> >>
> >>
> >> --
> >> busbey
> >>
> >> -
> >> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> >> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >>
> >
>
>
>
> --
> busbey
>


[jira] [Created] (HADOOP-13298) Fix the leftover L files in hadoop-build-tools/src/main/resources/META-INF/

2016-06-20 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-13298:
--

 Summary: Fix the leftover L files in 
hadoop-build-tools/src/main/resources/META-INF/
 Key: HADOOP-13298
 URL: https://issues.apache.org/jira/browse/HADOOP-13298
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
Reporter: Xiao Chen
Assignee: Sean Busbey


After HADOOP-12893, an extra copy of LICENSE.txt and NOTICE.txt exists in 
{{hadoop-build-tools/src/main/resources/META-INF/}} after a build. We should 
remove it and handle this the Maven way.

Details in 
https://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201606.mbox/browser

Thanks [~ste...@apache.org] for raising the issue and [~busbey] for offering 
the help!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: hadoop-build-tools/src/main/resources/META-INF/

2016-06-20 Thread Sean Busbey
file a jira please and I'll take a look.

On Fri, Jun 17, 2016 at 4:10 PM, Xiao Chen  wrote:
> Thanks Steve for reporting the issue and Sean for the suggestion. This is
> indeed from HADOOP-12893 (blush).
>
> I'm no maven expert so appreciate any recommendations.
>
> The reason for the current way is that for the L to be patched into a jar,
> it seems that maven remote resource plugin (which named itself to be the
> typical Apache licensing way) requires the files to be under
> src/main/resources. This was mentioned in their example, and I wasn't able
> to trick it to pack things not in there. I wish there were more examples to
> help in our case.
>
> So, in HADOOP-12893 I put a step to copy the L into
> hadoop-build-tools/src/main/resources dir, to allow it get packed into the
> jar. I thought about symlink but don't think it's a good way for Windows
> builds.
>
> It's not committed because we don't want an extra copy of L, we could list
> it in .gitignore.
>
>
> P.S. Tried a bit with Sean's suggestion of making it under
> target/generated-sources, but couldn't get the plugin to include it. I'm
> happy to try out more elegant solutions if you have any suggestions.
>
> Thanks!
>
>
> -Xiao
>
> On Fri, Jun 17, 2016 at 7:34 AM, Sean Busbey  wrote:
>>
>> If it's generated and we're following The Maven Way, it should be in
>> target. probably in target/generated-sources
>>
>> On Fri, Jun 17, 2016 at 9:33 AM, Steve Loughran 
>> wrote:
>> >
>> > I see (presumably from the licensing work), that I'm now getting
>> > hadoop-build-tools/src/main/resources/META-INF/ as an untracked directory.
>> >
>> > If this is generated, should it be in the source tree? And if so, should
>> > it be committed, or listed in .gitignore?
>> >
>> > -
>> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>> >
>>
>>
>>
>> --
>> busbey
>>
>> -
>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>>
>



-- 
busbey

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13297) hadoop-common module depends on hadoop-build-tools module, but the modules are not ordered correctly

2016-06-20 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-13297:
--

 Summary: hadoop-common module depends on hadoop-build-tools 
module, but the modules are not ordered correctly
 Key: HADOOP-13297
 URL: https://issues.apache.org/jira/browse/HADOOP-13297
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Akira AJISAKA


After HADOOP-12893, we are seeing {{mvn install -DskipTests}} failing in 
branch-2.7, branch-2.7.3, and branch-2.6. This failure is caused by the 
following:
* The hadoop-project module depends on the hadoop-build-tools module, but 
hadoop-project does not declare hadoop-build-tools as a submodule. 
Therefore, hadoop-build-tools is not built before hadoop-project.
* hadoop-build-tools pom and jar are not uploaded to the snapshot repository 
(https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/)

The build failure occurs if *both* of the above conditions are satisfied.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13296) Cleanup javadoc for Path

2016-06-20 Thread Daniel Templeton (JIRA)
Daniel Templeton created HADOOP-13296:
-

 Summary: Cleanup javadoc for Path
 Key: HADOOP-13296
 URL: https://issues.apache.org/jira/browse/HADOOP-13296
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.8.0
Reporter: Daniel Templeton
Assignee: Daniel Templeton
Priority: Minor


The javadoc in the Path class needs lots of help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-06-20 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/69/

[Jun 20, 2016 12:44:54 AM] (junping_du) YARN-5246. NMWebAppFilter web redirects 
drop query parameters.




-1 overall


The following subsystems voted -1:
unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   

[jira] [Created] (HADOOP-13292) Erasure Code misfunctions when 3 DataNode down

2016-06-20 Thread gao shan (JIRA)
gao shan created HADOOP-13292:
-

 Summary: Erasure Code misfunctions when 3 DataNode down
 Key: HADOOP-13292
 URL: https://issues.apache.org/jira/browse/HADOOP-13292
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: HDFS-7285
 Environment: 9 DataNodes and 1 NameNode. The erasure code policy is set 
as "6-3". When 3 DataNodes are down, erasure coding fails and an exception is 
thrown.
Reporter: gao shan


The following are the steps to reproduce:

1) hadoop fs -mkdir /ec
2) set erasure code policy as "6-3"
3) "write" data by:

time hadoop jar 
/opt/hadoop/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-SNAPSHOT.jar
  TestDFSIO -D test.build.data=/ec -write -nrFiles 30 -fileSize 12288 
-bufferSize 1073741824

4) Manually bring down 3 nodes: kill the "datanode" and "nodemanager" daemons on 
3 DataNodes.

5) "read" the data (via erasure coding) by:

time hadoop jar 
/opt/hadoop/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-SNAPSHOT.jar
  TestDFSIO -D test.build.data=/ec -read -nrFiles 30 -fileSize 12288 
-bufferSize 1073741824


Then the failure occurs and the following exception is thrown:

INFO mapreduce.Job: Task Id : attempt_1465445965249_0008_m_34_2, Status : 
FAILED
Error: java.io.IOException: 4 missing blocks, the stripe is: Offset=0, 
length=8388608, fetchedChunksNum=0, missingChunksNum=4
at 
org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.checkMissingBlocks(DFSStripedInputStream.java:614)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readParityChunks(DFSStripedInputStream.java:647)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readStripe(DFSStripedInputStream.java:762)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:316)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:450)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:941)
at java.io.DataInputStream.read(DataInputStream.java:149)
at org.apache.hadoop.fs.TestDFSIO$ReadMapper.doIO(TestDFSIO.java:531)
at org.apache.hadoop.fs.TestDFSIO$ReadMapper.doIO(TestDFSIO.java:508)
at org.apache.hadoop.fs.IOMapperBase.map(IOMapperBase.java:134)
at org.apache.hadoop.fs.IOMapperBase.map(IOMapperBase.java:37)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Different JIRA permissions for HADOOP and HDFS

2016-06-20 Thread Akira AJISAKA

There is no doc.

1. Login to ASF JIRA
2. Go to the project page (e.g. 
https://issues.apache.org/jira/browse/HADOOP )

3. Hit "Administration" tab
4. Hit "Roles" tab in left side
5. Add administrators/committers/contributors

I'll document this in https://wiki.apache.org/hadoop/HowToCommit

Regards,
Akira

On 6/20/16 16:08, Zheng, Kai wrote:

Thanks Akira for the nice info. So where is the link to do it or any how to 
doc? Sorry I browsed the existing wiki doc but didn't find how to add 
contributors.

Regards,
Kai

-Original Message-
From: Akira AJISAKA [mailto:ajisa...@oss.nttdata.co.jp]
Sent: Monday, June 20, 2016 12:22 PM
To: Zheng, Kai ; common-dev@hadoop.apache.org
Subject: Re: Different JIRA permissions for HADOOP and HDFS

Yes, the role allows committers to add/remove all the roles.

Now about 400 accounts have contributors roles in Hadoop common, and about 1000 
contributors in history.

Regards,
Akira

On 6/19/16 19:43, Zheng, Kai wrote:

Thanks Akira for the work.

What the committer role can do in addition to the committing codes? Can the 
role allow to add/remove a contributor? As I said in my last email, I want to 
have some contributor(s) back and may add more in some time later.

Not sure if we need to clean up long time no active contributors. It may be 
nice to know how many contributors the project has in its history. If the list 
is too long, maybe we can put them in another list, like OLD_CONTRIBUTORS.

Regards,
Kai

-Original Message-
From: Akira AJISAKA [mailto:ajisa...@oss.nttdata.co.jp]
Sent: Saturday, June 18, 2016 12:56 PM
To: Zheng, Kai ; common-dev@hadoop.apache.org
Subject: Re: Different JIRA permissions for HADOOP and HDFS

I'm doing the following steps to reduce the number of contributors:

1. Find committers who have only contributor role
2. Add them into committer role
3. Remove them from contributor role

However, this is a temporary solution.
Probably we need to do one of the followings in the near future.

* Create contributor2 role to increase the limit
* Remove contributors who have not been active for a long time

Regards,
Akira

On 6/18/16 10:24, Zheng, Kai wrote:

Hi Akira,

Some contributors (not committer) I know were found lost and we can't assign 
tasks to. Any way I can add them or have to trouble others for that each time 
when there is a new one? Thanks!

Regards,
Kai

-Original Message-
From: Akira AJISAKA [mailto:ajisa...@oss.nttdata.co.jp]
Sent: Monday, June 06, 2016 12:47 AM
To: common-dev@hadoop.apache.org
Subject: Re: Different JIRA permissions for HADOOP and HDFS

Now I can't add any more contributors in HADOOP Common, so I'll remove the 
contributors who have committers role to make the group smaller.
Please tell me if you have lost your roles by mistake.

Regards,
Akira

On 5/18/16 13:48, Akira AJISAKA wrote:

In HADOOP/HDFS/MAPREDUCE/YARN, I removed the administrators from
contributor group. After that, added Varun into contributor roles.
# Ray is already added into contributor roles.

Hi contributors/committers, please tell me if you have lost your
roles by mistake.


just remove a big chunk of the committers from all the lists

In Apache Hadoop Project bylaws, "A Committer is considered emeritus
by their own declaration or by not contributing in any form to the
project for over six months." Therefore we can remove them from the
list, but I'm thinking this is the last option.

Regards,
Akira

On 5/18/16 09:07, Allen Wittenauer wrote:


We should probably just remove a big chunk of the committers from
all the lists.  Most of them have disappeared from Hadoop anyway.
(The 55% growth in JIRA issues in patch available state in the past
year alone is a pretty good testament to that fact.)


On May 17, 2016, at 4:40 PM, Akira Ajisaka  wrote:


Is there some way for us to add a "Contributors2" group with the
same permissions as a workaround?  Or we could try to clean out
contributors who are no longer active, but that might be hard to figure out.


Contributors2 seems fine. AFAIK, committers sometimes cleaned out
contributors who are no longer active.
http://search-hadoop.com/m/uOzYt77s6mnzcRu1/v=threaded

Another option: Can we remove committers from contributor group to
reduce the number of contributors? I've already removed myself
from contributor group and it works well.

Regards,
Akira

On 5/18/16 03:16, Robert Kanter wrote:

We've also had a related long-standing issue (or at least I have)
where I can't add any more contributors to HADOOP or HDFS because
JIRA times out on looking up their username.  I'm guessing we
have too many contributors for those projects.  I bet YARN and MAPREDUCE are 
close.
Is there some way for us to add a "Contributors2" group with the
same permissions as a workaround?  Or we could try to clean out
contributors who are no longer active, but that might be hard to figure out.

- Robert

On Tue, May 17, 2016 at 11:12 AM, Ray Chiang

RE: Different JIRA permissions for HADOOP and HDFS

2016-06-20 Thread Zheng, Kai
Thanks Akira for the nice info. So where is the link to do it or any how to 
doc? Sorry I browsed the existing wiki doc but didn't find how to add 
contributors.

Regards,
Kai

-Original Message-
From: Akira AJISAKA [mailto:ajisa...@oss.nttdata.co.jp] 
Sent: Monday, June 20, 2016 12:22 PM
To: Zheng, Kai ; common-dev@hadoop.apache.org
Subject: Re: Different JIRA permissions for HADOOP and HDFS

Yes, the role allows committers to add/remove all the roles.

Now about 400 accounts have contributors roles in Hadoop common, and about 1000 
contributors in history.

Regards,
Akira

On 6/19/16 19:43, Zheng, Kai wrote:
> Thanks Akira for the work.
>
> What the committer role can do in addition to the committing codes? Can the 
> role allow to add/remove a contributor? As I said in my last email, I want to 
> have some contributor(s) back and may add more in some time later.
>
> Not sure if we need to clean up long time no active contributors. It may be 
> nice to know how many contributors the project has in its history. If the 
> list is too long, maybe we can put them in another list, like 
> OLD_CONTRIBUTORS.
>
> Regards,
> Kai
>
> -Original Message-
> From: Akira AJISAKA [mailto:ajisa...@oss.nttdata.co.jp]
> Sent: Saturday, June 18, 2016 12:56 PM
> To: Zheng, Kai ; common-dev@hadoop.apache.org
> Subject: Re: Different JIRA permissions for HADOOP and HDFS
>
> I'm doing the following steps to reduce the number of contributors:
>
> 1. Find committers who have only contributor role
> 2. Add them into committer role
> 3. Remove them from contributor role
>
> However, this is a temporary solution.
> Probably we need to do one of the followings in the near future.
>
> * Create contributor2 role to increase the limit
> * Remove contributors who have not been active for a long time
>
> Regards,
> Akira
>
> On 6/18/16 10:24, Zheng, Kai wrote:
>> Hi Akira,
>>
>> Some contributors (not committer) I know were found lost and we can't assign 
>> tasks to. Any way I can add them or have to trouble others for that each 
>> time when there is a new one? Thanks!
>>
>> Regards,
>> Kai
>>
>> -Original Message-
>> From: Akira AJISAKA [mailto:ajisa...@oss.nttdata.co.jp]
>> Sent: Monday, June 06, 2016 12:47 AM
>> To: common-dev@hadoop.apache.org
>> Subject: Re: Different JIRA permissions for HADOOP and HDFS
>>
>> Now I can't add any more contributors in HADOOP Common, so I'll remove the 
>> contributors who have committers role to make the group smaller.
>> Please tell me if you have lost your roles by mistake.
>>
>> Regards,
>> Akira
>>
>> On 5/18/16 13:48, Akira AJISAKA wrote:
>>> In HADOOP/HDFS/MAPREDUCE/YARN, I removed the administrators from 
>>> contributor group. After that, added Varun into contributor roles.
>>> # Ray is already added into contributor roles.
>>>
>>> Hi contributors/committers, please tell me if you have lost your 
>>> roles by mistake.
>>>
 just remove a big chunk of the committers from all the lists
>>> In Apache Hadoop Project bylaws, "A Committer is considered emeritus 
>>> by their own declaration or by not contributing in any form to the 
>>> project for over six months." Therefore we can remove them from the 
>>> list, but I'm thinking this is the last option.
>>>
>>> Regards,
>>> Akira
>>>
>>> On 5/18/16 09:07, Allen Wittenauer wrote:

 We should probably just remove a big chunk of the committers from 
 all the lists.  Most of them have disappeared from Hadoop anyway.
 (The 55% growth in JIRA issues in patch available state in the past 
 year alone is a pretty good testament to that fact.)

> On May 17, 2016, at 4:40 PM, Akira Ajisaka  wrote:
>
>> Is there some way for us to add a "Contributors2" group with the 
>> same permissions as a workaround?  Or we could try to clean out 
>> contributors who are no longer active, but that might be hard to figure 
>> out.
>
> Contributors2 seems fine. AFAIK, committers sometimes cleaned out 
> contributors who are no longer active.
> http://search-hadoop.com/m/uOzYt77s6mnzcRu1/v=threaded
>
> Another option: Can we remove committers from contributor group to 
> reduce the number of contributors? I've already removed myself 
> from contributor group and it works well.
>
> Regards,
> Akira
>
> On 5/18/16 03:16, Robert Kanter wrote:
>> We've also had a related long-standing issue (or at least I have) 
>> where I can't add any more contributors to HADOOP or HDFS because 
>> JIRA times out on looking up their username.  I'm guessing we 
>> have too many contributors for those projects.  I bet YARN and MAPREDUCE 
>> are close.
>> Is there some way for us to add a "Contributors2" group with the 
>> same permissions as a workaround?  Or we could try to clean out 
>> contributors who are no longer active, but that might be hard to figure 
>> out.
>>