Re: Different JIRA permissions for HADOOP and HDFS

2016-05-16 Thread Vinayakumar B
Thanks Akira.
On 17 May 2016 10:59, "Akira Ajisaka"  wrote:

> Hi Vinay,
>
> Added you into committer roles for HADOOP/MAPREDUCE/YARN.
>
> Regards,
> Akira
>
> On 5/17/16 13:45, Vinayakumar B wrote:
>
>> Hi Junping,
>>
>> It looks like, I too dont have permissions in projects except HDFS.
>>
>> Please grant me also to the group.
>>
>> Thanks in advance,
>> -Vinay
>> On 17 May 2016 6:10 a.m., "Sangjin Lee"  wrote:
>>
>> Thanks Junping! It seems to work now.
>>
>> On Mon, May 16, 2016 at 5:22 PM, Junping Du  wrote:
>>
>>> Someone fix the permission issue so that Administrator, committer and
>>> reporter can edit the issue now.
>>>
>>> Sangjin, it sounds like you were not in JIRA's committer list before and I
>>> just add you into committer roles for 4 projects. Hope it works for
>>> you now.
>>>
>>>
>>> Thanks,
>>>
>>>
>>> Junping
>>> --
>>> *From:* sjl...@gmail.com  on behalf of Sangjin Lee <
>>> sj...@apache.org>
>>> *Sent:* Monday, May 16, 2016 11:43 PM
>>> *To:* Zhihai Xu
>>> *Cc:* Junping Du; Arun Suresh; Zheng, Kai; Andrew Wang;
>>> common-dev@hadoop.apache.org; yarn-...@hadoop.apache.org
>>>
>>> *Subject:* Re: Different JIRA permissions for HADOOP and HDFS
>>>
>>> I also find myself unable to edit most of the JIRA fields, and that is
>>> across projects (HADOOP, YARN, MAPREDUCE, and HDFS). Commenting and the
>>> workflow buttons seem to work, however.
>>>
>>> On Mon, May 16, 2016 at 8:14 AM, Zhihai Xu  wrote:
>>>
>>> Great, Thanks Junping! Yes, the JIRA assignment works for me now.
>>>
>>> zhihai
>>>
>>> On Mon, May 16, 2016 at 5:29 AM, Junping Du  wrote:
>>>
> Zhihai, I just set you with committer permissions on MAPREDUCE JIRA. Would
> you try if the JIRA assignment works now? I cannot help on Hive project. It
> is better to ask hive project community for help.
> For Arun's problem. from my check, the Edit permission on JIRA only
> authorized to Administrator only. I don't know if this setting is by
> intention but it was not like this previously.
> Can someone who make the change to clarify why we need this change or
> revert to whatever it used to be?
>
> Thanks,
>
> Junping
> 
> From: Arun Suresh 
> Sent: Monday, May 16, 2016 9:42 AM
> To: Zhihai Xu
> Cc: Zheng, Kai; Andrew Wang; common-dev@hadoop.apache.org;
> yarn-...@hadoop.apache.org
> Subject: Re: Different JIRA permissions for HADOOP and HDFS
>
> Not sure if this is related.. but It also looks like I am now no longer
> allowed to modify description and headline of JIRAs anymore..
> Would appreciate greatly if someone can help revert this !
>
> Cheers
> -Arun
>
> On Mon, May 16, 2016 at 1:21 AM, Zhihai Xu  wrote:
>
>> Currently I also have permission issue to access the JIRA. I can't assign
>> the JIRA(I created) to myself. For example,
>> https://issues.apache.org/jira/browse/MAPREDUCE-6696 and
>> https://issues.apache.org/jira/browse/HIVE-13760. I can't find the button
>> to assign the JIRA to myself.
>> I don't have this issue two three weeks ago. Did anything change recently?
>> Can anyone help me solve this issue?
>>
>> thanks
>> zhihai
>>
>> On Mon, May 16, 2016 at 12:22 AM, Zheng, Kai  wrote:
>
>>
>> It works for me now, thanks Andrew!
>>>
>>> Regards,
>>> Kai
>>>
>>> -Original Message-
>>> From: Andrew Wang [mailto:andrew.w...@cloudera.com]
>>> Sent: Monday, May 16, 2016 12:14 AM
>>> To: Zheng, Kai 
>>> Cc: common-dev@hadoop.apache.org
>>> Subject: Re: Different JIRA permissions for HADOOP and HDFS
>>>
>>> I just gave you committer permissions on JIRA, try now?
>>>
>>> On Mon, May 16, 2016 at 12:03 AM, Zheng, Kai 
>>>
>> wrote:
>>
>>>
>>> I just ran into the bad situation that I committed HDFS-8449 but can't
>>> resolve the issue due to lacking the required permission to me. Am not
>>> sure if it's caused by my setup or environment change (temporally
>>> working in a new time zone). Would anyone help resolve the issue for
>>> me to avoid bad state? Thanks!

 -Original Message-
 From: Zheng, Kai [mailto:kai.zh...@intel.com]
 Sent: Sunday, May 15, 2016 3:20 PM
 To: Allen Wittenauer 
 Cc: common-dev@hadoop.apache.org
 Subject: RE: Different JIRA permissions for HADOOP and HDFS


Re: Different JIRA permissions for HADOOP and HDFS

2016-05-16 Thread Akira Ajisaka

Hi Vinay,

Added you into committer roles for HADOOP/MAPREDUCE/YARN.

Regards,
Akira

On 5/17/16 13:45, Vinayakumar B wrote:

Hi Junping,

It looks like, I too dont have permissions in projects except HDFS.

Please grant me also to the group.

Thanks in advance,
-Vinay
On 17 May 2016 6:10 a.m., "Sangjin Lee"  wrote:

Thanks Junping! It seems to work now.

On Mon, May 16, 2016 at 5:22 PM, Junping Du  wrote:


Someone fix the permission issue so that Administrator, committer and
reporter can edit the issue now.

Sangjin, it sounds like you were not in JIRA's committer list before and I
just add you into committer roles for 4 projects. Hope it works for
you now.​


Thanks,


Junping
--
*From:* sjl...@gmail.com  on behalf of Sangjin Lee <
sj...@apache.org>
*Sent:* Monday, May 16, 2016 11:43 PM
*To:* Zhihai Xu
*Cc:* Junping Du; Arun Suresh; Zheng, Kai; Andrew Wang;
common-dev@hadoop.apache.org; yarn-...@hadoop.apache.org

*Subject:* Re: Different JIRA permissions for HADOOP and HDFS

I also find myself unable to edit most of the JIRA fields, and that is
across projects (HADOOP, YARN, MAPREDUCE, and HDFS). Commenting and the
workflow buttons seem to work, however.

On Mon, May 16, 2016 at 8:14 AM, Zhihai Xu  wrote:


Great, Thanks Junping! Yes, the JIRA assignment works for me now.

zhihai

On Mon, May 16, 2016 at 5:29 AM, Junping Du  wrote:


Zhihai, I just set you with committer permissions on MAPREDUCE JIRA. Would
you try if the JIRA assignment works now? I cannot help on Hive project. It
is better to ask hive project community for help.
For Arun's problem. from my check, the Edit permission on JIRA only
authorized to Administrator only. I don't know if this setting is by
intention but it was not like this previously.
Can someone who make the change to clarify why we need this change or
revert to whatever it used to be?

Thanks,

Junping

From: Arun Suresh 
Sent: Monday, May 16, 2016 9:42 AM
To: Zhihai Xu
Cc: Zheng, Kai; Andrew Wang; common-dev@hadoop.apache.org;
yarn-...@hadoop.apache.org
Subject: Re: Different JIRA permissions for HADOOP and HDFS

Not sure if this is related.. but It also looks like I am now no longer
allowed to modify description and headline of JIRAs anymore..
Would appreciate greatly if someone can help revert this !

Cheers
-Arun

On Mon, May 16, 2016 at 1:21 AM, Zhihai Xu  wrote:


Currently I also have permission issue to access the JIRA. I can't assign
the JIRA(I created) to myself. For example,
https://issues.apache.org/jira/browse/MAPREDUCE-6696 and
https://issues.apache.org/jira/browse/HIVE-13760. I can't find the button
to assign the JIRA to myself.
I don't have this issue two three weeks ago. Did anything change recently?

Can anyone help me solve this issue?

thanks
zhihai




On Mon, May 16, 2016 at 12:22 AM, Zheng, Kai 

wrote:



It works for me now, thanks Andrew!

Regards,
Kai

-Original Message-
From: Andrew Wang [mailto:andrew.w...@cloudera.com]
Sent: Monday, May 16, 2016 12:14 AM
To: Zheng, Kai 
Cc: common-dev@hadoop.apache.org
Subject: Re: Different JIRA permissions for HADOOP and HDFS

I just gave you committer permissions on JIRA, try now?

On Mon, May 16, 2016 at 12:03 AM, Zheng, Kai 

wrote:



I just ran into the bad situation that I committed HDFS-8449 but can't
resolve the issue due to lacking the required permission to me. Am not
sure if it's caused by my setup or environment change (temporally
working in a new time zone). Would anyone help resolve the issue for
me to avoid bad state? Thanks!

-Original Message-
From: Zheng, Kai [mailto:kai.zh...@intel.com]
Sent: Sunday, May 15, 2016 3:20 PM
To: Allen Wittenauer 
Cc: common-dev@hadoop.apache.org
Subject: RE: Different JIRA permissions for HADOOP and HDFS

Thanks Allen for illustrating this in details. I understand. The left
question is, is it intended only JIRA owner (not sure about admin
users) can do the operations like updating a patch?

Regards,
Kai

-Original Message-
From: Allen Wittenauer [mailto:allenwittena...@yahoo.com.INVALID]
Sent: Saturday, May 14, 2016 9:38 AM
To: Zheng, Kai 
Cc: common-dev@hadoop.apache.org
Subject: Re: Different JIRA permissions for HADOOP and HDFS



On May 14, 2016, at 7:07 AM, Zheng, Kai 

wrote:


Hi,

Noticed this difference but not sure if it’s intended. YARN is similar
with HDFS. It’s not convenient. Any clarifying?


Under JIRA, different projects (e.g., HADOOP, YARN, MAPREDUCE,
HDFS, YETUS, HBASE, ACCUMULO, etc) may have different settings. At
one point in time, all of the Hadoop subprojects were under one JIRA
project (HADOOP). But then a bunch of folks decided they didn’t


Re: Different JIRA permissions for HADOOP and HDFS

2016-05-16 Thread Vinayakumar B
Hi Junping,

It looks like I too don't have permissions in any project except HDFS.

Please add me to the committer group as well.

Thanks in advance,
-Vinay
On 17 May 2016 6:10 a.m., "Sangjin Lee"  wrote:

Thanks Junping! It seems to work now.

On Mon, May 16, 2016 at 5:22 PM, Junping Du  wrote:

> Someone fix the permission issue so that Administrator, committer and
> reporter can edit the issue now.
>
> Sangjin, it sounds like you were not in JIRA's committer list before and I
> just add you into committer roles for 4 projects. Hope it works for
> you now.​
>
>
> Thanks,
>
>
> Junping
> --
> *From:* sjl...@gmail.com  on behalf of Sangjin Lee <
> sj...@apache.org>
> *Sent:* Monday, May 16, 2016 11:43 PM
> *To:* Zhihai Xu
> *Cc:* Junping Du; Arun Suresh; Zheng, Kai; Andrew Wang;
> common-dev@hadoop.apache.org; yarn-...@hadoop.apache.org
>
> *Subject:* Re: Different JIRA permissions for HADOOP and HDFS
>
> I also find myself unable to edit most of the JIRA fields, and that is
> across projects (HADOOP, YARN, MAPREDUCE, and HDFS). Commenting and the
> workflow buttons seem to work, however.
>
> On Mon, May 16, 2016 at 8:14 AM, Zhihai Xu  wrote:
>
>> Great, Thanks Junping! Yes, the JIRA assignment works for me now.
>>
>> zhihai
>>
>> On Mon, May 16, 2016 at 5:29 AM, Junping Du  wrote:
>>
>> > Zhihai, I just set you with committer permissions on MAPREDUCE JIRA. Would
>> > you try if the JIRA assignment works now? I cannot help on Hive project. It
>> > is better to ask hive project community for help.
>> > For Arun's problem. from my check, the Edit permission on JIRA only
>> > authorized to Administrator only. I don't know if this setting is by
>> > intention but it was not like this previously.
>> > Can someone who make the change to clarify why we need this change or
>> > revert to whatever it used to be?
>> >
>> > Thanks,
>> >
>> > Junping
>> > 
>> > From: Arun Suresh 
>> > Sent: Monday, May 16, 2016 9:42 AM
>> > To: Zhihai Xu
>> > Cc: Zheng, Kai; Andrew Wang; common-dev@hadoop.apache.org;
>> > yarn-...@hadoop.apache.org
>> > Subject: Re: Different JIRA permissions for HADOOP and HDFS
>> >
>> > Not sure if this is related.. but It also looks like I am now no longer
>> > allowed to modify description and headline of JIRAs anymore..
>> > Would appreciate greatly if someone can help revert this !
>> >
>> > Cheers
>> > -Arun
>> >
>> > On Mon, May 16, 2016 at 1:21 AM, Zhihai Xu  wrote:
>> >
>> > > Currently I also have permission issue to access the JIRA. I can't assign
>> > > the JIRA(I created) to myself. For example,
>> > > https://issues.apache.org/jira/browse/MAPREDUCE-6696 and
>> > > https://issues.apache.org/jira/browse/HIVE-13760. I can't find the button
>> > > to assign the JIRA to myself.
>> > > I don't have this issue two three weeks ago. Did anything change recently?
>> > > Can anyone help me solve this issue?
>> > >
>> > > thanks
>> > > zhihai
>> > >
>> > >
>> > >
>> > >
>> > > On Mon, May 16, 2016 at 12:22 AM, Zheng, Kai 
>> > wrote:
>> > >
>> > > > It works for me now, thanks Andrew!
>> > > >
>> > > > Regards,
>> > > > Kai
>> > > >
>> > > > -Original Message-
>> > > > From: Andrew Wang [mailto:andrew.w...@cloudera.com]
>> > > > Sent: Monday, May 16, 2016 12:14 AM
>> > > > To: Zheng, Kai 
>> > > > Cc: common-dev@hadoop.apache.org
>> > > > Subject: Re: Different JIRA permissions for HADOOP and HDFS
>> > > >
>> > > > I just gave you committer permissions on JIRA, try now?
>> > > >
>> > > > On Mon, May 16, 2016 at 12:03 AM, Zheng, Kai 
>> > > wrote:
>> > > >
>> > > > > I just ran into the bad situation that I committed HDFS-8449 but can't
>> > > > > resolve the issue due to lacking the required permission to me. Am not
>> > > > > sure if it's caused by my setup or environment change (temporally
>> > > > > working in a new time zone). Would anyone help resolve the issue for
>> > > > > me to avoid bad state? Thanks!
>> > > > >
>> > > > > -Original Message-
>> > > > > From: Zheng, Kai [mailto:kai.zh...@intel.com]
>> > > > > Sent: Sunday, May 15, 2016 3:20 PM
>> > > > > To: Allen Wittenauer 
>> > > > > Cc: common-dev@hadoop.apache.org
>> > > > > Subject: RE: Different JIRA permissions for HADOOP and HDFS
>> > > > >
>> > > > > Thanks Allen for illustrating this in details. I understand. The left
>> > > > > question is, is it intended only JIRA owner (not sure about admin
>> > > > > users) can do the operations like updating a patch?
>> > > > >
>> > > > > Regards,
>> > > > > Kai
>> > > > >
>> > > > > -Original Message-
>> > > > > From: Allen Wittenauer [mailto:allenwittena...@yahoo.com.INVALID]
>> > > > > Sent: Saturday, May 14, 2016 9:38 AM
>> > > > > To: Zheng, Kai 

[jira] [Created] (HADOOP-13163) Reuse pre-computed filestatus in Distcp-CopyMapper

2016-05-16 Thread Rajesh Balamohan (JIRA)
Rajesh Balamohan created HADOOP-13163:
-

 Summary: Reuse pre-computed filestatus in Distcp-CopyMapper
 Key: HADOOP-13163
 URL: https://issues.apache.org/jira/browse/HADOOP-13163
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Rajesh Balamohan


https://github.com/apache/hadoop/blob/af942585a108d70e0946f6dd4c465a54d068eabf/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java#L185

targetStatus is already computed there and can be reused in the checkUpdate() 
function instead of fetching the file status again. This wouldn't be a major 
issue with the NameNode/HDFS, but in the case of S3, getFileStatus calls can 
be expensive.
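
As a rough illustration of the pattern (this is not the actual CopyMapper code; 
the class and method names below are made up for this sketch), the idea is to 
thread the status that was already fetched into the check instead of looking it 
up a second time:

import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Illustrative sketch only: reusing a pre-computed FileStatus. */
class PrecomputedStatusSketch {

  /** Before: the check triggers its own metadata lookup. */
  static boolean needsCopy(FileSystem targetFS, Path target, long sourceLen)
      throws IOException {
    try {
      FileStatus status = targetFS.getFileStatus(target); // second remote call, costly on S3
      return status.getLen() != sourceLen;
    } catch (FileNotFoundException e) {
      return true; // target missing, must copy
    }
  }

  /** After: the status fetched earlier in the mapper is passed in and reused. */
  static boolean needsCopy(FileStatus targetStatus, long sourceLen) {
    return targetStatus == null || targetStatus.getLen() != sourceLen;
  }
}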




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13162) Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs

2016-05-16 Thread Rajesh Balamohan (JIRA)
Rajesh Balamohan created HADOOP-13162:
-

 Summary: Consider reducing number of getFileStatus calls in 
S3AFileSystem.mkdirs
 Key: HADOOP-13162
 URL: https://issues.apache.org/jira/browse/HADOOP-13162
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Rajesh Balamohan
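
The JIRA body is empty; as a purely hypothetical sketch (not S3AFileSystem's 
actual implementation) of why a mkdirs against an object store tends to be 
getFileStatus-heavy, and how the call count could be reduced:

import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Hypothetical sketch only; not real S3A code. */
class MkdirsProbeSketch {

  /** Naive shape: probe every ancestor, i.e. one remote metadata call per path component. */
  static void probeAllAncestors(FileSystem fs, Path dir) throws IOException {
    for (Path p = dir; p != null; p = p.getParent()) {
      try {
        if (fs.getFileStatus(p).isFile()) {     // one getFileStatus per ancestor
          throw new IOException("Parent is a file: " + p);
        }
      } catch (FileNotFoundException e) {
        // missing ancestor: a directory marker would be created for it
      }
    }
  }

  /** Cheaper shape: stop probing at the first ancestor that already exists as a directory. */
  static void probeUntilExisting(FileSystem fs, Path dir) throws IOException {
    for (Path p = dir; p != null; p = p.getParent()) {
      try {
        if (fs.getFileStatus(p).isFile()) {
          throw new IOException("Parent is a file: " + p);
        }
        break;                                  // existing directory found; stop walking up
      } catch (FileNotFoundException e) {
        // keep walking up to find an existing ancestor
      }
    }
  }
}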






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Build failed in Jenkins: Hadoop-common-trunk-Java8 #1485

2016-05-16 Thread Apache Jenkins Server
See 

Changes:

[aw] HADOOP-13161. remove JDK7 from Dockerfile (aw)

--
[...truncated 5167 lines...]
Tests run: 41, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.736 sec - in 
org.apache.hadoop.net.TestNetUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.762 sec - in 
org.apache.hadoop.net.TestScriptBasedMappingWithDependency
Running org.apache.hadoop.net.TestTableMapping
Running org.apache.hadoop.net.TestScriptBasedMapping
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.559 sec - in 
org.apache.hadoop.net.TestTableMapping
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.674 sec - in 
org.apache.hadoop.net.TestScriptBasedMapping
Running org.apache.hadoop.net.unix.TestDomainSocketWatcher
Running org.apache.hadoop.net.unix.TestDomainSocket
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.592 sec - in 
org.apache.hadoop.net.unix.TestDomainSocketWatcher
Running org.apache.hadoop.net.TestSwitchMapping
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.371 sec - in 
org.apache.hadoop.net.TestSocketIOWithTimeout
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.309 sec - in 
org.apache.hadoop.net.TestSwitchMapping
Running org.apache.hadoop.net.TestStaticMapping
Running org.apache.hadoop.cli.TestCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.637 sec - in 
org.apache.hadoop.cli.TestCLI
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.808 sec - in 
org.apache.hadoop.net.TestStaticMapping
Running org.apache.hadoop.io.TestSortedMapWritable
Running org.apache.hadoop.io.TestIOUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.244 sec - in 
org.apache.hadoop.io.TestSortedMapWritable
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.435 sec - in 
org.apache.hadoop.io.TestIOUtils
Running org.apache.hadoop.io.TestSequenceFile
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.871 sec - in 
org.apache.hadoop.net.unix.TestDomainSocket
Running org.apache.hadoop.io.TestEnumSetWritable
Running org.apache.hadoop.io.TestWritableName
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.216 sec - in 
org.apache.hadoop.io.TestWritableName
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.572 sec - in 
org.apache.hadoop.io.TestEnumSetWritable
Running org.apache.hadoop.io.TestBoundedByteArrayOutputStream
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.086 sec - in 
org.apache.hadoop.io.TestBoundedByteArrayOutputStream
Running org.apache.hadoop.io.TestSequenceFileAppend
Running org.apache.hadoop.io.TestBytesWritable
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.499 sec - in 
org.apache.hadoop.metrics2.lib.TestMutableMetrics
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.229 sec - in 
org.apache.hadoop.io.TestBytesWritable
Running org.apache.hadoop.io.TestSequenceFileSerialization
Running org.apache.hadoop.io.TestDataByteBuffers
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.141 sec - in 
org.apache.hadoop.io.TestSequenceFileAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.533 sec - in 
org.apache.hadoop.io.TestDataByteBuffers
Running org.apache.hadoop.io.file.tfile.TestTFileComparators
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.214 sec - in 
org.apache.hadoop.io.TestSequenceFileSerialization
Running org.apache.hadoop.io.file.tfile.TestTFileSeek
Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsByteArrays
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.106 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileComparators
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.409 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsByteArrays
Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
Running org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.399 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
Running org.apache.hadoop.io.file.tfile.TestTFileStreams
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.962 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
Running org.apache.hadoop.io.file.tfile.TestTFile
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.295 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileSeek
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.006 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileStreams
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.617 sec - in 
org.apache.hadoop.io.file.tfile.TestTFile
Running 
org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
Running 

Jenkins build is back to normal : Hadoop-common-trunk-Java8 #1484

2016-05-16 Thread Apache Jenkins Server
See 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] Set minimum version of Hadoop 3 to JDK8 (HADOOP-11858)

2016-05-16 Thread Allen Wittenauer
OK, it looks like if someone commits HADOOP-13161, then trunk only uses 
JDK8 and branch-2 will use JDK7 and JDK8 during precommit with no changes 
required to Apache Yetus. :D


> On May 16, 2016, at 5:38 PM, Allen Wittenauer 
>  wrote:
> 
> 
> There’s a bunch of stuff that needs to happen at the Jenkins level:
> 
> * Kill off the JDK7 trunk builds for HADOOP, HDFS, MAPRED, YARN
> * Remove JDK7 from pre-commit for HADOOP, HDFS, MAPRED, YARN
> 
> One thing that needs to happen in the Apache Yetus project:
> * Wait until YETUS-369 has been written and committed to re-enable JDK7 for 
> pre-commit  (This effectively means that *ALL* JDK7 testing will *ONLY* be 
> happening in the regularly scheduled builds)
> 
> One thing that really should happen in the Apache Hadoop project:
> * Remove JDK7 from trunk Dockerfile
> 
> I’ll start banging on this stuff over the next few days.
> 
> 
>> On May 16, 2016, at 3:58 PM, Andrew Wang  wrote:
>> 
>> Very happy to announce that we've committed HADOOP-11858. I'm looking
>> forward to writing my first lambda in Java. I also attached a video to the
>> JIRA so we can all relive this moment in Hadoop development history.
>> 
>> It sounds like there's some precommit work to align test-patch with this
>> change. I'm hoping Allen will take point on this, but ping me if I can be
>> of any assistance.
>> 
>> On Thu, May 12, 2016 at 11:53 AM, Li Lu  wrote:
>> 
>>> I’d like to bring YARN-4977 into attention for using Java 8. HADOOP-13083
>>> does the maven change and in yarn-api there are ~5000 javadoc warnings.
>>> 
>>> Li Lu
>>> 
 On May 10, 2016, at 08:32, Akira AJISAKA 
>>> wrote:
 
 Hi developers,
 
 Before cutting 3.0.0-alpha RC, I'd like to drop JDK7 support in trunk.
 Given this is a critical change, I'm thinking we should get the
>>> consensus first.
 
 One concern I think is, when the minimum version is set to JDK8, we need
>>> to configure Jenkins to disable multi JDK test only in trunk.
 
 Any thoughts?
 
 Thanks,
 Akira
 
 -
 To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
 For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
 
 
>>> 
>>> 
> 
> 
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13161) remove JDK7 from Dockerfile

2016-05-16 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-13161:
-

 Summary: remove JDK7 from Dockerfile
 Key: HADOOP-13161
 URL: https://issues.apache.org/jira/browse/HADOOP-13161
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0-alpha1
Reporter: Allen Wittenauer


We should slim down the Docker image by removing JDK7 now that trunk no longer 
supports it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-12930) [Umbrella] Dynamic subcommands for hadoop shell scripts

2016-05-16 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-12930.
---
Resolution: Fixed

Committed to trunk

> [Umbrella] Dynamic subcommands for hadoop shell scripts
> ---
>
> Key: HADOOP-12930
> URL: https://issues.apache.org/jira/browse/HADOOP-12930
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12930.00.patch
>
>
> Umbrella for converting hadoop, hdfs, mapred, and yarn to allow for dynamic 
> subcommands. See first comment for more details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] Set minimum version of Hadoop 3 to JDK8 (HADOOP-11858)

2016-05-16 Thread John Zhuge
Thanks Andrew. Looking forward to CompletableFuture and Streams. 
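
For instance, a tiny self-contained Java 8 snippet (nothing Hadoop-specific, just 
an illustration of what trunk can now compile) combining a lambda, the Streams 
API and CompletableFuture:

import java.util.Arrays;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class Java8Taste {
  public static void main(String[] args) {
    // Streams + a lambda: squares of 1..5 joined into one string.
    String squares = Arrays.asList(1, 2, 3, 4, 5).stream()
        .map(n -> Integer.toString(n * n))
        .collect(Collectors.joining(", "));

    // CompletableFuture: build the message asynchronously, then block for the result.
    String message = CompletableFuture
        .supplyAsync(() -> "squares: " + squares)
        .thenApply(String::toUpperCase)
        .join();

    System.out.println(message); // SQUARES: 1, 4, 9, 16, 25
  }
}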

John Zhuge
Software Engineer, Cloudera

> On May 16, 2016, at 3:58 PM, Andrew Wang  wrote:
> 
> Very happy to announce that we've committed HADOOP-11858. I'm looking
> forward to writing my first lambda in Java. I also attached a video to the
> JIRA so we can all relive this moment in Hadoop development history.
> 
> It sounds like there's some precommit work to align test-patch with this
> change. I'm hoping Allen will take point on this, but ping me if I can be
> of any assistance.
> 
>> On Thu, May 12, 2016 at 11:53 AM, Li Lu  wrote:
>> 
>> I’d like to bring YARN-4977 into attention for using Java 8. HADOOP-13083
>> does the maven change and in yarn-api there are ~5000 javadoc warnings.
>> 
>> Li Lu
>> 
 On May 10, 2016, at 08:32, Akira AJISAKA 
>>> wrote:
>>> 
>>> Hi developers,
>>> 
>>> Before cutting 3.0.0-alpha RC, I'd like to drop JDK7 support in trunk.
>>> Given this is a critical change, I'm thinking we should get the
>> consensus first.
>>> 
>>> One concern I think is, when the minimum version is set to JDK8, we need
>> to configure Jenkins to disable multi JDK test only in trunk.
>>> 
>>> Any thoughts?
>>> 
>>> Thanks,
>>> Akira
>>> 
>>> -
>>> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
>>> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
>> 
>> 

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[RESULT] Re: [VOTE] Merge feature branch HADOOP-12930

2016-05-16 Thread Allen Wittenauer
Vote passes.

1 = +1 non-binding
4 = +1 binding


I’ll squash and commit here in a sec.

Thanks everyone!
-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Different JIRA permissions for HADOOP and HDFS

2016-05-16 Thread Sangjin Lee
Thanks Junping! It seems to work now.

On Mon, May 16, 2016 at 5:22 PM, Junping Du  wrote:

> Someone fix the permission issue so that Administrator, committer and
> reporter can edit the issue now.
>
> Sangjin, it sounds like you were not in JIRA's committer list before and I
> just add you into committer roles for 4 projects. Hope it works for
> you now.​
>
>
> Thanks,
>
>
> Junping
> --
> *From:* sjl...@gmail.com  on behalf of Sangjin Lee <
> sj...@apache.org>
> *Sent:* Monday, May 16, 2016 11:43 PM
> *To:* Zhihai Xu
> *Cc:* Junping Du; Arun Suresh; Zheng, Kai; Andrew Wang;
> common-dev@hadoop.apache.org; yarn-...@hadoop.apache.org
>
> *Subject:* Re: Different JIRA permissions for HADOOP and HDFS
>
> I also find myself unable to edit most of the JIRA fields, and that is
> across projects (HADOOP, YARN, MAPREDUCE, and HDFS). Commenting and the
> workflow buttons seem to work, however.
>
> On Mon, May 16, 2016 at 8:14 AM, Zhihai Xu  wrote:
>
>> Great, Thanks Junping! Yes, the JIRA assignment works for me now.
>>
>> zhihai
>>
>> On Mon, May 16, 2016 at 5:29 AM, Junping Du  wrote:
>>
>> > Zhihai, I just set you with committer permissions on MAPREDUCE JIRA. Would
>> > you try if the JIRA assignment works now? I cannot help on Hive project. It
>> > is better to ask hive project community for help.
>> > For Arun's problem. from my check, the Edit permission on JIRA only
>> > authorized to Administrator only. I don't know if this setting is by
>> > intention but it was not like this previously.
>> > Can someone who make the change to clarify why we need this change or
>> > revert to whatever it used to be?
>> >
>> > Thanks,
>> >
>> > Junping
>> > 
>> > From: Arun Suresh 
>> > Sent: Monday, May 16, 2016 9:42 AM
>> > To: Zhihai Xu
>> > Cc: Zheng, Kai; Andrew Wang; common-dev@hadoop.apache.org;
>> > yarn-...@hadoop.apache.org
>> > Subject: Re: Different JIRA permissions for HADOOP and HDFS
>> >
>> > Not sure if this is related.. but It also looks like I am now no longer
>> > allowed to modify description and headline of JIRAs anymore..
>> > Would appreciate greatly if someone can help revert this !
>> >
>> > Cheers
>> > -Arun
>> >
>> > On Mon, May 16, 2016 at 1:21 AM, Zhihai Xu  wrote:
>> >
>> > > Currently I also have permission issue to access the JIRA. I can't assign
>> > > the JIRA(I created) to myself. For example,
>> > > https://issues.apache.org/jira/browse/MAPREDUCE-6696 and
>> > > https://issues.apache.org/jira/browse/HIVE-13760. I can't find the button
>> > > to assign the JIRA to myself.
>> > > I don't have this issue two three weeks ago. Did anything change recently?
>> > > Can anyone help me solve this issue?
>> > >
>> > > thanks
>> > > zhihai
>> > >
>> > >
>> > >
>> > >
>> > > On Mon, May 16, 2016 at 12:22 AM, Zheng, Kai 
>> > wrote:
>> > >
>> > > > It works for me now, thanks Andrew!
>> > > >
>> > > > Regards,
>> > > > Kai
>> > > >
>> > > > -Original Message-
>> > > > From: Andrew Wang [mailto:andrew.w...@cloudera.com]
>> > > > Sent: Monday, May 16, 2016 12:14 AM
>> > > > To: Zheng, Kai 
>> > > > Cc: common-dev@hadoop.apache.org
>> > > > Subject: Re: Different JIRA permissions for HADOOP and HDFS
>> > > >
>> > > > I just gave you committer permissions on JIRA, try now?
>> > > >
>> > > > On Mon, May 16, 2016 at 12:03 AM, Zheng, Kai 
>> > > wrote:
>> > > >
>> > > > > I just ran into the bad situation that I committed HDFS-8449 but can't
>> > > > > resolve the issue due to lacking the required permission to me. Am not
>> > > > > sure if it's caused by my setup or environment change (temporally
>> > > > > working in a new time zone). Would anyone help resolve the issue for
>> > > > > me to avoid bad state? Thanks!
>> > > > >
>> > > > > -Original Message-
>> > > > > From: Zheng, Kai [mailto:kai.zh...@intel.com]
>> > > > > Sent: Sunday, May 15, 2016 3:20 PM
>> > > > > To: Allen Wittenauer 
>> > > > > Cc: common-dev@hadoop.apache.org
>> > > > > Subject: RE: Different JIRA permissions for HADOOP and HDFS
>> > > > >
>> > > > > Thanks Allen for illustrating this in details. I understand. The left
>> > > > > question is, is it intended only JIRA owner (not sure about admin
>> > > > > users) can do the operations like updating a patch?
>> > > > >
>> > > > > Regards,
>> > > > > Kai
>> > > > >
>> > > > > -Original Message-
>> > > > > From: Allen Wittenauer [mailto:allenwittena...@yahoo.com.INVALID]
>> > > > > Sent: Saturday, May 14, 2016 9:38 AM
>> > > > > To: Zheng, Kai 
>> > > > > Cc: common-dev@hadoop.apache.org
>> > > > > Subject: Re: Different JIRA permissions for HADOOP and HDFS
>> > > > >
>> > > > >
>> > > > > > On May 14, 2016, at 7:07 AM, Zheng, 

Re: 2.7.3 release plan

2016-05-16 Thread Vinod Kumar Vavilapalli
I am just waiting on HADOOP-12893.

HADOOP-13154 was just created in the last day; we will have to see if it 
really should block the release.

Major tickets are usually taken on a time basis: if they get in by the proposed 
timelines, we get them in. Otherwise, we move them over.

Thanks
+Vinod

> On May 16, 2016, at 5:20 PM, larry mccay  wrote:
> 
> Curious on the status of 2.7.3
> 
> It seems that we still have two outstanding critical/blocker JIRAs:
> 
>   1. HADOOP-12893 (Bug): Verify LICENSE.txt and NOTICE.txt
>   2. HADOOP-13154 (Sub-task): S3AFileSystem
>      printAmazonServiceException/printAmazonClientException appear copy & paste
>      of AWS examples
> 
> 
> But 45-ish when we include Majors as well.
> 
> I know there are a number of critical issues with fixes that need to go out.
> 
> What is the plan?
> 
> On Tue, Apr 12, 2016 at 2:09 PM, Vinod Kumar Vavilapalli > wrote:
> 
>> Others and I committed a few, I pushed out a few.
>> 
>> Down to just three now!
>> 
>> +Vinod
>> 
>>> On Apr 6, 2016, at 3:00 PM, Vinod Kumar Vavilapalli 
>> wrote:
>>> 
>>> Down to only 10 blocker / critical tickets (
>> https://issues.apache.org/jira/issues/?filter=12335343 <
>> https://issues.apache.org/jira/issues/?filter=12335343>) now!
>>> 
>>> Thanks
>>> +Vinod
>>> 
 On Mar 30, 2016, at 4:18 PM, Vinod Kumar Vavilapalli <
>> vino...@apache.org > wrote:
 
 Hi all,
 
 Got nudged about 2.7.3. Was previously waiting for 2.6.4 to go out
>> (which did go out mid February). Got a little busy since.
 
 Following up the 2.7.2 maintenance release, we should work towards a
>> 2.7.3. The focus obviously is to have blocker issues [1], bug-fixes and
>> *no* features / improvements.
 
 I hope to cut an RC in a week - giving enough time for outstanding
>> blocker / critical issues. Will start moving out any tickets that are not
>> blockers and/or won’t fit the timeline - there are 3 blockers and 15
>> critical tickets outstanding as of now.
 
 Thanks,
 +Vinod
 
 [1] 2.7.3 release blockers:
>> https://issues.apache.org/jira/issues/?filter=12335343 <
>> https://issues.apache.org/jira/issues/?filter=12335343>
>>> 
>> 
>> 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] Set minimum version of Hadoop 3 to JDK8 (HADOOP-11858)

2016-05-16 Thread Allen Wittenauer

There’s a bunch of stuff that needs to happen at the Jenkins level:

* Kill off the JDK7 trunk builds for HADOOP, HDFS, MAPRED, YARN
* Remove JDK7 from pre-commit for HADOOP, HDFS, MAPRED, YARN

One thing that needs to happen in the Apache Yetus project:
* Wait until YETUS-369 has been written and committed to re-enable JDK7 for 
pre-commit  (This effectively means that *ALL* JDK7 testing will *ONLY* be 
happening in the regularly scheduled builds)

One thing that really should happen in the Apache Hadoop project:
* Remove JDK7 from trunk Dockerfile

I’ll start banging on this stuff over the next few days.


> On May 16, 2016, at 3:58 PM, Andrew Wang  wrote:
> 
> Very happy to announce that we've committed HADOOP-11858. I'm looking
> forward to writing my first lambda in Java. I also attached a video to the
> JIRA so we can all relive this moment in Hadoop development history.
> 
> It sounds like there's some precommit work to align test-patch with this
> change. I'm hoping Allen will take point on this, but ping me if I can be
> of any assistance.
> 
> On Thu, May 12, 2016 at 11:53 AM, Li Lu  wrote:
> 
>> I’d like to bring YARN-4977 into attention for using Java 8. HADOOP-13083
>> does the maven change and in yarn-api there are ~5000 javadoc warnings.
>> 
>> Li Lu
>> 
>>> On May 10, 2016, at 08:32, Akira AJISAKA 
>> wrote:
>>> 
>>> Hi developers,
>>> 
>>> Before cutting 3.0.0-alpha RC, I'd like to drop JDK7 support in trunk.
>>> Given this is a critical change, I'm thinking we should get the
>> consensus first.
>>> 
>>> One concern I think is, when the minimum version is set to JDK8, we need
>> to configure Jenkins to disable multi JDK test only in trunk.
>>> 
>>> Any thoughts?
>>> 
>>> Thanks,
>>> Akira
>>> 
>>> -
>>> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
>>> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
>>> 
>>> 
>> 
>> 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Different JIRA permissions for HADOOP and HDFS

2016-05-16 Thread Junping Du
Someone fixed the permission issue, so the Administrator, committer and reporter 
roles can edit issues now.

Sangjin, it sounds like you were not in JIRA's committer list before, and I just 
added you to the committer role for 4 projects. Hope it works for you now.


Thanks,


Junping


From: sjl...@gmail.com  on behalf of Sangjin Lee 

Sent: Monday, May 16, 2016 11:43 PM
To: Zhihai Xu
Cc: Junping Du; Arun Suresh; Zheng, Kai; Andrew Wang; 
common-dev@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: Re: Different JIRA permissions for HADOOP and HDFS

I also find myself unable to edit most of the JIRA fields, and that is across 
projects (HADOOP, YARN, MAPREDUCE, and HDFS). Commenting and the workflow 
buttons seem to work, however.

On Mon, May 16, 2016 at 8:14 AM, Zhihai Xu 
> wrote:
Great, Thanks Junping! Yes, the JIRA assignment works for me now.

zhihai

On Mon, May 16, 2016 at 5:29 AM, Junping Du 
> wrote:

> Zhihai, I just set you with committer permissions on MAPREDUCE JIRA. Would
> you try if the JIRA assignment works now? I cannot help on Hive project. It
> is better to ask hive project community for help.
> For Arun's problem. from my check, the Edit permission on JIRA only
> authorized to Administrator only. I don't know if this setting is by
> intention but it was not like this previously.
> Can someone who make the change to clarify why we need this change or
> revert to whatever it used to be?
>
> Thanks,
>
> Junping
> 
> From: Arun Suresh >
> Sent: Monday, May 16, 2016 9:42 AM
> To: Zhihai Xu
> Cc: Zheng, Kai; Andrew Wang; 
> common-dev@hadoop.apache.org;
> yarn-...@hadoop.apache.org
> Subject: Re: Different JIRA permissions for HADOOP and HDFS
>
> Not sure if this is related.. but It also looks like I am now no longer
> allowed to modify description and headline of JIRAs anymore..
> Would appreciate greatly if someone can help revert this !
>
> Cheers
> -Arun
>
> On Mon, May 16, 2016 at 1:21 AM, Zhihai Xu 
> > wrote:
>
> > Currently I also have permission issue to access the JIRA. I can't assign
> > the JIRA(I created) to myself. For example,
> > https://issues.apache.org/jira/browse/MAPREDUCE-6696 and
> > https://issues.apache.org/jira/browse/HIVE-13760. I can't find the
> button
> > to assign the JIRA to myself.
> > I don't have this issue two three weeks ago. Did anything change
> recently?
> > Can anyone help me solve this issue?
> >
> > thanks
> > zhihai
> >
> >
> >
> >
> > On Mon, May 16, 2016 at 12:22 AM, Zheng, Kai 
> > >
> wrote:
> >
> > > It works for me now, thanks Andrew!
> > >
> > > Regards,
> > > Kai
> > >
> > > -Original Message-
> > > From: Andrew Wang 
> > > [mailto:andrew.w...@cloudera.com]
> > > Sent: Monday, May 16, 2016 12:14 AM
> > > To: Zheng, Kai >
> > > Cc: common-dev@hadoop.apache.org
> > > Subject: Re: Different JIRA permissions for HADOOP and HDFS
> > >
> > > I just gave you committer permissions on JIRA, try now?
> > >
> > > On Mon, May 16, 2016 at 12:03 AM, Zheng, Kai 
> > > >
> > wrote:
> > >
> > > > I just ran into the bad situation that I committed HDFS-8449 but
> can't
> > > > resolve the issue due to lacking the required permission to me. Am
> not
> > > > sure if it's caused by my setup or environment change (temporally
> > > > working in a new time zone). Would anyone help resolve the issue for
> > > > me to avoid bad state? Thanks!
> > > >
> > > > -Original Message-
> > > > From: Zheng, Kai 
> > > > [mailto:kai.zh...@intel.com]
> > > > Sent: Sunday, May 15, 2016 3:20 PM
> > > > To: Allen Wittenauer 
> > > > Cc: common-dev@hadoop.apache.org
> > > > Subject: RE: Different JIRA permissions for HADOOP and HDFS
> > > >
> > > > Thanks Allen for illustrating this in details. I understand. The left
> > > > question is, is it intended only JIRA owner (not sure about admin
> > > > users) can do the operations like updating a patch?
> > > >
> > > > Regards,
> > > > Kai
> > > >
> > > > -Original Message-
> > > > From: Allen Wittenauer 
> > > > [mailto:allenwittena...@yahoo.com.INVALID]
> > > > Sent: Saturday, May 14, 2016 9:38 AM
> > > > To: Zheng, Kai >
> > > > Cc: common-dev@hadoop.apache.org
> > > > Subject: Re: Different JIRA permissions for HADOOP and HDFS

Re: 2.7.3 release plan

2016-05-16 Thread larry mccay
Curious on the status of 2.7.3

It seems that we still have two outstanding critical/blocker JIRAs:

   1. HADOOP-12893 (Bug): Verify LICENSE.txt and NOTICE.txt
   2. HADOOP-13154 (Sub-task): S3AFileSystem
      printAmazonServiceException/printAmazonClientException appear copy & paste
      of AWS examples


But 45-ish when we include Majors as well.

I know there are a number of critical issues with fixes that need to go out.

What is the plan?

On Tue, Apr 12, 2016 at 2:09 PM, Vinod Kumar Vavilapalli  wrote:

> Others and I committed a few, I pushed out a few.
>
> Down to just three now!
>
> +Vinod
>
> > On Apr 6, 2016, at 3:00 PM, Vinod Kumar Vavilapalli 
> wrote:
> >
> > Down to only 10 blocker / critical tickets (
> https://issues.apache.org/jira/issues/?filter=12335343 <
> https://issues.apache.org/jira/issues/?filter=12335343>) now!
> >
> > Thanks
> > +Vinod
> >
> >> On Mar 30, 2016, at 4:18 PM, Vinod Kumar Vavilapalli <
> vino...@apache.org > wrote:
> >>
> >> Hi all,
> >>
> >> Got nudged about 2.7.3. Was previously waiting for 2.6.4 to go out
> (which did go out mid February). Got a little busy since.
> >>
> >> Following up the 2.7.2 maintenance release, we should work towards a
> 2.7.3. The focus obviously is to have blocker issues [1], bug-fixes and
> *no* features / improvements.
> >>
> >> I hope to cut an RC in a week - giving enough time for outstanding
> blocker / critical issues. Will start moving out any tickets that are not
> blockers and/or won’t fit the timeline - there are 3 blockers and 15
> critical tickets outstanding as of now.
> >>
> >> Thanks,
> >> +Vinod
> >>
> >> [1] 2.7.3 release blockers:
> https://issues.apache.org/jira/issues/?filter=12335343 <
> https://issues.apache.org/jira/issues/?filter=12335343>
> >
>
>


Build failed in Jenkins: Hadoop-Common-trunk #2771

2016-05-16 Thread Apache Jenkins Server
See 

Changes:

[lei] HDFS-10410. RedundantEditLogInputStream.LOG is set to wrong class. (John

--
[...truncated 31 lines...]
[INFO] Apache Hadoop Maven Plugins
[INFO] Apache Hadoop MiniKDC
[INFO] Apache Hadoop Auth
[INFO] Apache Hadoop Auth Examples
[INFO] Apache Hadoop Common
[INFO] Apache Hadoop NFS
[INFO] Apache Hadoop KMS
[INFO] Apache Hadoop Common Project
[INFO] Apache Hadoop HDFS Client
[INFO] Apache Hadoop HDFS
[INFO] Apache Hadoop HDFS Native Client
[INFO] Apache Hadoop HttpFS
[INFO] Apache Hadoop HDFS BookKeeper Journal
[INFO] Apache Hadoop HDFS-NFS
[INFO] Apache Hadoop HDFS Project
[INFO] Apache Hadoop YARN
[INFO] Apache Hadoop YARN API
[INFO] Apache Hadoop YARN Common
[INFO] Apache Hadoop YARN Server
[INFO] Apache Hadoop YARN Server Common
[INFO] Apache Hadoop YARN NodeManager
[INFO] Apache Hadoop YARN Web Proxy
[INFO] Apache Hadoop YARN ApplicationHistoryService
[INFO] Apache Hadoop YARN ResourceManager
[INFO] Apache Hadoop YARN Server Tests
[INFO] Apache Hadoop YARN Client
[INFO] Apache Hadoop YARN SharedCacheManager
[INFO] Apache Hadoop YARN Timeline Plugin Storage
[INFO] Apache Hadoop YARN Applications
[INFO] Apache Hadoop YARN DistributedShell
[INFO] Apache Hadoop YARN Unmanaged Am Launcher
[INFO] Apache Hadoop YARN Site
[INFO] Apache Hadoop YARN Registry
[INFO] Apache Hadoop YARN Project
[INFO] Apache Hadoop MapReduce Client
[INFO] Apache Hadoop MapReduce Core
[INFO] Apache Hadoop MapReduce Common
[INFO] Apache Hadoop MapReduce Shuffle
[INFO] Apache Hadoop MapReduce App
[INFO] Apache Hadoop MapReduce HistoryServer
[INFO] Apache Hadoop MapReduce JobClient
[INFO] Apache Hadoop MapReduce HistoryServer Plugins
[INFO] Apache Hadoop MapReduce NativeTask
[INFO] Apache Hadoop MapReduce Examples
[INFO] Apache Hadoop MapReduce
[INFO] Apache Hadoop MapReduce Streaming
[INFO] Apache Hadoop Distributed Copy
[INFO] Apache Hadoop Archives
[INFO] Apache Hadoop Archive Logs
[INFO] Apache Hadoop Rumen
[INFO] Apache Hadoop Gridmix
[INFO] Apache Hadoop Data Join
[INFO] Apache Hadoop Ant Tasks
[INFO] Apache Hadoop Extras
[INFO] Apache Hadoop Pipes
[INFO] Apache Hadoop OpenStack support
[INFO] Apache Hadoop Amazon Web Services support
[INFO] Apache Hadoop Azure support
[INFO] Apache Hadoop Client
[INFO] Apache Hadoop Mini-Cluster
[INFO] Apache Hadoop Scheduler Load Simulator
[INFO] Apache Hadoop Tools Dist
[INFO] Apache Hadoop Kafka Library support
[INFO] Apache Hadoop Tools
[INFO] Apache Hadoop Distribution
[INFO] 
[INFO] Using the builder 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder
 with a thread count of 1
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop Main 3.0.0-alpha1-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (clean) @ hadoop-main ---
[WARNING] Rule 1: org.apache.maven.plugins.enforcer.RequireJavaVersion failed 
with message:
Detected JDK Version: 1.7.0-55 is not in the allowed range [1.8,).
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop Main  FAILURE [  0.267 s]
[INFO] Apache Hadoop Build Tools . SKIPPED
[INFO] Apache Hadoop Project POM . SKIPPED
[INFO] Apache Hadoop Annotations . SKIPPED
[INFO] Apache Hadoop Project Dist POM  SKIPPED
[INFO] Apache Hadoop Assemblies .. SKIPPED
[INFO] Apache Hadoop Maven Plugins ... SKIPPED
[INFO] Apache Hadoop MiniKDC . SKIPPED
[INFO] Apache Hadoop Auth  SKIPPED
[INFO] Apache Hadoop Auth Examples ... SKIPPED
[INFO] Apache Hadoop Common .. SKIPPED
[INFO] Apache Hadoop NFS . SKIPPED
[INFO] Apache Hadoop KMS . SKIPPED
[INFO] Apache Hadoop Common Project .. SKIPPED
[INFO] Apache Hadoop HDFS Client . SKIPPED
[INFO] Apache Hadoop HDFS  SKIPPED
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SKIPPED
[INFO] Apache Hadoop YARN  SKIPPED
[INFO] Apache Hadoop YARN API  SKIPPED
[INFO] Apache Hadoop YARN Common . SKIPPED
[INFO] Apache Hadoop YARN Server 

Build failed in Jenkins: Hadoop-Common-trunk #2770

2016-05-16 Thread Apache Jenkins Server
See 

Changes:

[jing9] HADOOP-13146. Refactor RetryInvocationHandler. Contributed by Tsz Wo

[wang] HADOOP-11858. [JDK8] Set minimum version of Hadoop 3 to JDK 8.

--
[...truncated 23 lines...]
[INFO] Reactor Build Order:
[INFO] 
[INFO] Apache Hadoop Main
[INFO] Apache Hadoop Build Tools
[INFO] Apache Hadoop Project POM
[INFO] Apache Hadoop Annotations
[INFO] Apache Hadoop Project Dist POM
[INFO] Apache Hadoop Assemblies
[INFO] Apache Hadoop Maven Plugins
[INFO] Apache Hadoop MiniKDC
[INFO] Apache Hadoop Auth
[INFO] Apache Hadoop Auth Examples
[INFO] Apache Hadoop Common
[INFO] Apache Hadoop NFS
[INFO] Apache Hadoop KMS
[INFO] Apache Hadoop Common Project
[INFO] Apache Hadoop HDFS Client
[INFO] Apache Hadoop HDFS
[INFO] Apache Hadoop HDFS Native Client
[INFO] Apache Hadoop HttpFS
[INFO] Apache Hadoop HDFS BookKeeper Journal
[INFO] Apache Hadoop HDFS-NFS
[INFO] Apache Hadoop HDFS Project
[INFO] Apache Hadoop YARN
[INFO] Apache Hadoop YARN API
[INFO] Apache Hadoop YARN Common
[INFO] Apache Hadoop YARN Server
[INFO] Apache Hadoop YARN Server Common
[INFO] Apache Hadoop YARN NodeManager
[INFO] Apache Hadoop YARN Web Proxy
[INFO] Apache Hadoop YARN ApplicationHistoryService
[INFO] Apache Hadoop YARN ResourceManager
[INFO] Apache Hadoop YARN Server Tests
[INFO] Apache Hadoop YARN Client
[INFO] Apache Hadoop YARN SharedCacheManager
[INFO] Apache Hadoop YARN Timeline Plugin Storage
[INFO] Apache Hadoop YARN Applications
[INFO] Apache Hadoop YARN DistributedShell
[INFO] Apache Hadoop YARN Unmanaged Am Launcher
[INFO] Apache Hadoop YARN Site
[INFO] Apache Hadoop YARN Registry
[INFO] Apache Hadoop YARN Project
[INFO] Apache Hadoop MapReduce Client
[INFO] Apache Hadoop MapReduce Core
[INFO] Apache Hadoop MapReduce Common
[INFO] Apache Hadoop MapReduce Shuffle
[INFO] Apache Hadoop MapReduce App
[INFO] Apache Hadoop MapReduce HistoryServer
[INFO] Apache Hadoop MapReduce JobClient
[INFO] Apache Hadoop MapReduce HistoryServer Plugins
[INFO] Apache Hadoop MapReduce NativeTask
[INFO] Apache Hadoop MapReduce Examples
[INFO] Apache Hadoop MapReduce
[INFO] Apache Hadoop MapReduce Streaming
[INFO] Apache Hadoop Distributed Copy
[INFO] Apache Hadoop Archives
[INFO] Apache Hadoop Archive Logs
[INFO] Apache Hadoop Rumen
[INFO] Apache Hadoop Gridmix
[INFO] Apache Hadoop Data Join
[INFO] Apache Hadoop Ant Tasks
[INFO] Apache Hadoop Extras
[INFO] Apache Hadoop Pipes
[INFO] Apache Hadoop OpenStack support
[INFO] Apache Hadoop Amazon Web Services support
[INFO] Apache Hadoop Azure support
[INFO] Apache Hadoop Client
[INFO] Apache Hadoop Mini-Cluster
[INFO] Apache Hadoop Scheduler Load Simulator
[INFO] Apache Hadoop Tools Dist
[INFO] Apache Hadoop Kafka Library support
[INFO] Apache Hadoop Tools
[INFO] Apache Hadoop Distribution
[INFO] 
[INFO] Using the builder 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder
 with a thread count of 1
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop Main 3.0.0-alpha1-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (clean) @ hadoop-main ---
[WARNING] Rule 1: org.apache.maven.plugins.enforcer.RequireJavaVersion failed 
with message:
Detected JDK Version: 1.7.0-55 is not in the allowed range [1.8,).
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop Main  FAILURE [  0.239 s]
[INFO] Apache Hadoop Build Tools . SKIPPED
[INFO] Apache Hadoop Project POM . SKIPPED
[INFO] Apache Hadoop Annotations . SKIPPED
[INFO] Apache Hadoop Project Dist POM  SKIPPED
[INFO] Apache Hadoop Assemblies .. SKIPPED
[INFO] Apache Hadoop Maven Plugins ... SKIPPED
[INFO] Apache Hadoop MiniKDC . SKIPPED
[INFO] Apache Hadoop Auth  SKIPPED
[INFO] Apache Hadoop Auth Examples ... SKIPPED
[INFO] Apache Hadoop Common .. SKIPPED
[INFO] Apache Hadoop NFS . SKIPPED
[INFO] Apache Hadoop KMS . SKIPPED
[INFO] Apache Hadoop Common Project .. SKIPPED
[INFO] Apache Hadoop HDFS Client . SKIPPED
[INFO] Apache Hadoop HDFS  SKIPPED
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED

Build failed in Jenkins: Hadoop-common-trunk-Java8 #1482

2016-05-16 Thread Apache Jenkins Server
See 

Changes:

[jing9] HADOOP-13146. Refactor RetryInvocationHandler. Contributed by Tsz Wo

[wang] HADOOP-11858. [JDK8] Set minimum version of Hadoop 3 to JDK 8.

--
[...truncated 5167 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.432 sec - in 
org.apache.hadoop.fs.TestFileContext
Tests run: 17, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 1.928 sec - in 
org.apache.hadoop.fs.TestFsShellCopy
Running org.apache.hadoop.fs.TestDelegationTokenRenewer
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.893 sec - in 
org.apache.hadoop.fs.TestHarFileSystemBasics
Running org.apache.hadoop.fs.TestFileSystemCaching
Running org.apache.hadoop.fs.TestFcLocalFsUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.014 sec - in 
org.apache.hadoop.fs.TestFcLocalFsUtil
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.461 sec - in 
org.apache.hadoop.fs.TestFileSystemCaching
Running org.apache.hadoop.fs.TestLocalFsFCStatistics
Running org.apache.hadoop.fs.TestTruncatedInputBug
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.934 sec - in 
org.apache.hadoop.fs.TestTruncatedInputBug
Tests run: 63, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 4.355 sec - in 
org.apache.hadoop.fs.TestSymlinkLocalFSFileContext
Running org.apache.hadoop.fs.TestFsShell
Running org.apache.hadoop.fs.TestDU
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.647 sec - in 
org.apache.hadoop.fs.TestLocalFsFCStatistics
Running org.apache.hadoop.fs.TestLocalDirAllocator
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.966 sec - in 
org.apache.hadoop.fs.TestFsShell
Running org.apache.hadoop.fs.viewfs.TestChRootedFileSystem
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.156 sec - in 
org.apache.hadoop.fs.TestLocalDirAllocator
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemDelegationTokenSupport
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.99 sec - in 
org.apache.hadoop.fs.TestDelegationTokenRenewer
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAuthorityLocalFs
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.544 sec - in 
org.apache.hadoop.fs.viewfs.TestChRootedFileSystem
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.04 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFileSystemDelegationTokenSupport
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemDelegation
Running org.apache.hadoop.fs.viewfs.TestViewFsConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.864 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsConfig
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.5 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFileSystemDelegation
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem
Running org.apache.hadoop.fs.viewfs.TestFcPermissionsLocalFs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.19 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsWithAuthorityLocalFs
Running org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.459 sec - in 
org.apache.hadoop.fs.viewfs.TestFcPermissionsLocalFs
Running org.apache.hadoop.fs.viewfs.TestViewFsURIs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.146 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsURIs
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.836 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem
Running org.apache.hadoop.fs.viewfs.TestViewFsTrash
Running org.apache.hadoop.fs.viewfs.TestViewFsLocalFs
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.095 sec - in 
org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
Running 
org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.583 sec - in 
org.apache.hadoop.fs.TestDU
Running org.apache.hadoop.fs.viewfs.TestViewfsFileStatus
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.209 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsTrash
Running org.apache.hadoop.fs.viewfs.TestFcCreateMkdirLocalFs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.462 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsLocalFs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.933 sec - in 
org.apache.hadoop.fs.viewfs.TestViewfsFileStatus
Running org.apache.hadoop.fs.viewfs.TestChRootedFs
Running org.apache.hadoop.fs.viewfs.TestFcMainOperationsLocalFs
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.856 sec - in 
org.apache.hadoop.fs.viewfs.TestFcCreateMkdirLocalFs
Running org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
Tests run: 60, Failures: 0, Errors: 0, Skipped: 

[jira] [Resolved] (HADOOP-11858) [JDK8] Set minimum version of Hadoop 3 to JDK 8

2016-05-16 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HADOOP-11858.
--
  Resolution: Fixed
   Fix Version/s: 3.0.0-alpha1
Target Version/s:   (was: )

Committed to trunk. Thanks Robert for the patch and everyone for reviewing!

> [JDK8] Set minimum version of Hadoop 3 to JDK 8
> ---
>
> Key: HADOOP-11858
> URL: https://issues.apache.org/jira/browse/HADOOP-11858
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Blocker
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-11858.001.patch, HADOOP-11858.002.patch, 
> HADOOP-11858.003.patch
>
>
> Set minimum version of trunk to JDK 8



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Different JIRA permissions for HADOOP and HDFS

2016-05-16 Thread Sangjin Lee
I also find myself unable to edit most of the JIRA fields, and that is
across projects (HADOOP, YARN, MAPREDUCE, and HDFS). Commenting and the
workflow buttons seem to work, however.

On Mon, May 16, 2016 at 8:14 AM, Zhihai Xu  wrote:

> Great, Thanks Junping! Yes, the JIRA assignment works for me now.
>
> zhihai
>
> On Mon, May 16, 2016 at 5:29 AM, Junping Du  wrote:
>
> > Zhihai, I just set you with committer permissions on MAPREDUCE JIRA.
> Would
> > you try if the JIRA assignment works now? I cannot help on Hive project.
> It
> > is better to ask hive project community for help.
> > For Arun's problem: from my check, the Edit permission on JIRA is
> > authorized to Administrators only. I don't know if this setting is
> > intentional, but it was not like this previously.
> > Can whoever made the change clarify why we need this change, or
> > revert to whatever it used to be?
> >
> > Thanks,
> >
> > Junping
> > 
> > From: Arun Suresh 
> > Sent: Monday, May 16, 2016 9:42 AM
> > To: Zhihai Xu
> > Cc: Zheng, Kai; Andrew Wang; common-dev@hadoop.apache.org;
> > yarn-...@hadoop.apache.org
> > Subject: Re: Different JIRA permissions for HADOOP and HDFS
> >
> > Not sure if this is related.. but It also looks like I am now no longer
> > allowed to modify description and headline of JIRAs anymore..
> > Would appreciate greatly if someone can help revert this !
> >
> > Cheers
> > -Arun
> >
> > On Mon, May 16, 2016 at 1:21 AM, Zhihai Xu  wrote:
> >
> > > Currently I also have permission issue to access the JIRA. I can't
> assign
> > > the JIRA(I created) to myself. For example,
> > > https://issues.apache.org/jira/browse/MAPREDUCE-6696 and
> > > https://issues.apache.org/jira/browse/HIVE-13760. I can't find the
> > button
> > > to assign the JIRA to myself.
> > > I don't have this issue two three weeks ago. Did anything change
> > recently?
> > > Can anyone help me solve this issue?
> > >
> > > thanks
> > > zhihai
> > >
> > >
> > >
> > >
> > > On Mon, May 16, 2016 at 12:22 AM, Zheng, Kai 
> > wrote:
> > >
> > > > It works for me now, thanks Andrew!
> > > >
> > > > Regards,
> > > > Kai
> > > >
> > > > -Original Message-
> > > > From: Andrew Wang [mailto:andrew.w...@cloudera.com]
> > > > Sent: Monday, May 16, 2016 12:14 AM
> > > > To: Zheng, Kai 
> > > > Cc: common-dev@hadoop.apache.org
> > > > Subject: Re: Different JIRA permissions for HADOOP and HDFS
> > > >
> > > > I just gave you committer permissions on JIRA, try now?
> > > >
> > > > On Mon, May 16, 2016 at 12:03 AM, Zheng, Kai 
> > > wrote:
> > > >
> > > > > I just ran into the bad situation that I committed HDFS-8449 but
> > can't
> > > > > resolve the issue due to lacking the required permission to me. Am
> > not
> > > > > sure if it's caused by my setup or environment change (temporarily
> > > > > working in a new time zone). Would anyone help resolve the issue
> for
> > > > > me to avoid bad state? Thanks!
> > > > >
> > > > > -Original Message-
> > > > > From: Zheng, Kai [mailto:kai.zh...@intel.com]
> > > > > Sent: Sunday, May 15, 2016 3:20 PM
> > > > > To: Allen Wittenauer 
> > > > > Cc: common-dev@hadoop.apache.org
> > > > > Subject: RE: Different JIRA permissions for HADOOP and HDFS
> > > > >
> > > > > Thanks Allen for illustrating this in details. I understand. The
> left
> > > > > question is, is it intended only JIRA owner (not sure about admin
> > > > > users) can do the operations like updating a patch?
> > > > >
> > > > > Regards,
> > > > > Kai
> > > > >
> > > > > -Original Message-
> > > > > From: Allen Wittenauer [mailto:allenwittena...@yahoo.com.INVALID]
> > > > > Sent: Saturday, May 14, 2016 9:38 AM
> > > > > To: Zheng, Kai 
> > > > > Cc: common-dev@hadoop.apache.org
> > > > > Subject: Re: Different JIRA permissions for HADOOP and HDFS
> > > > >
> > > > >
> > > > > > On May 14, 2016, at 7:07 AM, Zheng, Kai 
> > wrote:
> > > > > >
> > > > > > Hi,
> > > > > >
> > > > > > Noticed this difference but not sure if it’s intended. YARN is
> > > > > > similar
> > > > > with HDFS. It’s not convenient. Any clarifying?
> > > > >
> > > > >
> > > > > Under JIRA, different projects (e.g., HADOOP, YARN,
> > MAPREDUCE,
> > > > > HDFS, YETUS, HBASE, ACCUMULO, etc) may have different settings.  At
> > > > > one point in time, all of the Hadoop subprojects were under one
> JIRA
> > > > > project (HADOOP). But then a bunch of folks decided they didn’t
> want
> > > > > to see the other sub projects issues so they split them up…. and
> thus
> > > > > setting the stage for duplicate code and operational divergence in
> > the
> > > > source.
> > > > >
> > > > > Since people don’t realize or care that they are separate,
> > > > > people will file INFRA tickets or whatever 

Build failed in Jenkins: Hadoop-Common-trunk #2769

2016-05-16 Thread Apache Jenkins Server
See 

Changes:

[epayne] YARN-5069. TestFifoScheduler.testResourceOverCommit race condition.

--
[...truncated 5161 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.777 sec - in 
org.apache.hadoop.fs.TestTruncatedInputBug
Running org.apache.hadoop.fs.TestFsShell
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.76 sec - in 
org.apache.hadoop.fs.TestFileSystemInitialization
Running org.apache.hadoop.fs.TestFileContextResolveAfs
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.975 sec - in 
org.apache.hadoop.fs.TestFsShell
Running org.apache.hadoop.fs.TestBlockLocation
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.084 sec - in 
org.apache.hadoop.fs.TestBlockLocation
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.875 sec - in 
org.apache.hadoop.fs.TestFileContextResolveAfs
Running org.apache.hadoop.fs.TestFsShellCopy
Running org.apache.hadoop.fs.permission.TestAcl
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.217 sec - in 
org.apache.hadoop.fs.permission.TestAcl
Running org.apache.hadoop.fs.permission.TestFsPermission
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.649 sec - in 
org.apache.hadoop.fs.permission.TestFsPermission
Tests run: 17, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 1.545 sec - in 
org.apache.hadoop.fs.TestFsShellCopy
Running org.apache.hadoop.fs.sftp.TestSFTPFileSystem
Running org.apache.hadoop.fs.TestFileContextDeleteOnExit
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.783 sec - in 
org.apache.hadoop.fs.TestDelegationTokenRenewer
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.791 sec - in 
org.apache.hadoop.fs.TestFileContextDeleteOnExit
Running org.apache.hadoop.fs.TestFcLocalFsUtil
Running org.apache.hadoop.fs.TestStat
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.834 sec - in 
org.apache.hadoop.fs.TestFcLocalFsUtil
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.914 sec - in 
org.apache.hadoop.fs.TestStat
Running org.apache.hadoop.fs.shell.TestCopy
Running org.apache.hadoop.fs.shell.TestTextCommand
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.025 sec - in 
org.apache.hadoop.fs.shell.TestCopy
Running org.apache.hadoop.fs.shell.TestCopyPreserveFlag
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.204 sec - in 
org.apache.hadoop.fs.shell.TestTextCommand
Running org.apache.hadoop.fs.shell.TestAclCommands
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.025 sec - in 
org.apache.hadoop.fs.shell.TestCopyPreserveFlag
Running org.apache.hadoop.fs.shell.find.TestFind
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.025 sec - in 
org.apache.hadoop.fs.shell.TestAclCommands
Running org.apache.hadoop.fs.shell.find.TestPrint0
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.124 sec - in 
org.apache.hadoop.fs.sftp.TestSFTPFileSystem
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.11 sec - in 
org.apache.hadoop.fs.shell.find.TestPrint0
Running org.apache.hadoop.fs.shell.find.TestPrint
Running org.apache.hadoop.fs.shell.find.TestAnd
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.045 sec - in 
org.apache.hadoop.fs.shell.find.TestFind
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.408 sec - in 
org.apache.hadoop.fs.shell.find.TestAnd
Running org.apache.hadoop.fs.shell.find.TestResult
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.122 sec - in 
org.apache.hadoop.fs.shell.find.TestResult
Running org.apache.hadoop.fs.shell.find.TestIname
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.056 sec - in 
org.apache.hadoop.fs.shell.find.TestPrint
Running org.apache.hadoop.fs.shell.find.TestName
Running org.apache.hadoop.fs.shell.find.TestFilterExpression
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.449 sec - in 
org.apache.hadoop.fs.shell.find.TestFilterExpression
Running org.apache.hadoop.fs.shell.TestPathExceptions
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.239 sec - in 
org.apache.hadoop.fs.shell.find.TestIname
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.091 sec - in 
org.apache.hadoop.fs.shell.TestPathExceptions
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.248 sec - in 
org.apache.hadoop.fs.shell.find.TestName
Running org.apache.hadoop.fs.shell.TestCommandFactory
Running org.apache.hadoop.fs.shell.TestLs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.2 sec - in 
org.apache.hadoop.fs.shell.TestCommandFactory
Running org.apache.hadoop.fs.shell.TestMove
Running org.apache.hadoop.fs.shell.TestXAttrCommands
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.698 sec - in 

Re: checkstyle and package-info

2016-05-16 Thread Bokor Andras
If there is no other volunteer until tomorrow morning CET, we will give it a try.
John Zhuge  wrote:
>Thanks Andras.
>
>Created a new https://issues.apache.org/jira/browse/HADOOP-13160 and
>uploaded a patch.
>
>Tested on Ubuntu but I don't have access to a Windows environment. Could
>someone kindly test the patch in Windows?
>
>Thanks,
>
>John Zhuge
>Software Engineer, Cloudera
>
>On Mon, May 16, 2016 at 11:15 AM, Bokor Andras  wrote:
>
>> In Hadoop, JUnit tests do not always follow the Test.*.java pattern. Instead I
>> suggest the following suppress element:
>> <suppress checks="JavadocPackage" files="/src/test/.*"/>
>>
>> This needs to be tested on Windows due to the slashes.
>>
>> John Zhuge  wrote:
>>
>>
>> http://stackoverflow.com/questions/5871020/different-checkstyle-rules-for-main-and-test-in-maven
>>
>> http://stackoverflow.com/questions/25894431/checkstyle-different-rules-for-different-files
>>
>> So something like this for Hadoop:
>>
>> 
>>
>> I will test this approach.
>>
>>
>> John Zhuge
>> Software Engineer, Cloudera
>>
>> On Mon, May 16, 2016 at 9:39 AM, Chris Nauroth 
>> wrote:
>>
>> > I'm in favor of disabling this, but I'm not sure how.  It looks like
>> > HADOOP-12701 recently enabled Checkstyle in src/test, so we hadn't seen
>> > this before.  Unfortunately, I can't find a way to keep Checkstyle
>> > generally on for src/test, but with different rules from src/main.
>> >
>> > http://checkstyle.sourceforge.net/config_javadoc.html
>> >
>> >
>> > Does anyone else have an idea?
>> >
>> >
>> > --Chris Nauroth
>> >
>> >
>> >
>> >
>> > On 5/16/16, 6:34 AM, "Steve Loughran"  wrote:
>> >
>> > >
>> > >I've got checkstyle rejecting a patch as there's no package-info.java
>> > >file in src/test
>> > >
>> > >https://issues.apache.org/jira/browse/HADOOP-13130
>> > >
>> > >I'm happy to argue the merits of the package-info files in production
>> > >code: they can be good, if people put in the effort to write and
>> > >maintain, but not for tests. Can we get this turned off?
>> >
>> >
>> > -
>> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>> >
>> >
>>
>


Build failed in Jenkins: Hadoop-common-trunk-Java8 #1481

2016-05-16 Thread Apache Jenkins Server
See 

Changes:

[epayne] YARN-5069. TestFifoScheduler.testResourceOverCommit race condition.

--
[...truncated 5581 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.016 sec - in 
org.apache.hadoop.fs.TestFcLocalFsPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestAfsCheckPath
Running org.apache.hadoop.fs.TestFileUtil
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.218 sec - in 
org.apache.hadoop.fs.TestAfsCheckPath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.746 sec - in 
org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestGlobPattern
Running org.apache.hadoop.fs.TestDFVariations
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.178 sec - in 
org.apache.hadoop.fs.TestGlobPattern
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.295 sec - in 
org.apache.hadoop.fs.TestDFVariations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestBlockLocation
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.92 sec - in 
org.apache.hadoop.fs.viewfs.TestFcMainOperationsLocalFs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.079 sec - in 
org.apache.hadoop.fs.TestBlockLocation
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestCopy
Running org.apache.hadoop.fs.shell.TestCopyPreserveFlag
Running org.apache.hadoop.fs.shell.TestPathExceptions
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.108 sec - in 
org.apache.hadoop.fs.shell.TestPathExceptions
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestAclCommands
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.951 sec - in 
org.apache.hadoop.fs.shell.TestCopy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.919 sec - in 
org.apache.hadoop.fs.shell.TestCopyPreserveFlag
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestLs
Running org.apache.hadoop.fs.shell.TestTextCommand
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.83 sec - in 
org.apache.hadoop.fs.shell.TestAclCommands
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestMove
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.981 sec - in 
org.apache.hadoop.fs.shell.TestTextCommand
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.325 sec - in 
org.apache.hadoop.fs.shell.TestLs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.895 sec - in 
org.apache.hadoop.fs.shell.TestMove
Running org.apache.hadoop.fs.shell.TestPathData
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestCommandFactory
Running org.apache.hadoop.fs.shell.TestCount
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.186 sec - in 
org.apache.hadoop.fs.shell.TestCommandFactory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestXAttrCommands
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.914 sec - in 
org.apache.hadoop.fs.shell.TestPathData
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.531 sec - in 
org.apache.hadoop.fs.shell.TestXAttrCommands
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.879 sec - in 
org.apache.hadoop.fs.shell.TestCount

Re: checkstyle and package-info

2016-05-16 Thread John Zhuge
Thanks Andras.

Created a new https://issues.apache.org/jira/browse/HADOOP-13160 and
uploaded a patch.

Tested on Ubuntu but I don't have access to a Windows environment. Could
someone kindly test the patch in Windows?

Thanks,

John Zhuge
Software Engineer, Cloudera

On Mon, May 16, 2016 at 11:15 AM, Bokor Andras  wrote:

> In Hadoop, JUnit tests do not always follow the Test.*.java pattern. Instead I
> suggest the following suppress element:
> <suppress checks="JavadocPackage" files="/src/test/.*"/>
>
> This needs to be tested on Windows due to the slashes.
>
> John Zhuge  wrote:
>
>
> http://stackoverflow.com/questions/5871020/different-checkstyle-rules-for-main-and-test-in-maven
>
> http://stackoverflow.com/questions/25894431/checkstyle-different-rules-for-different-files
>
> So something like this for Hadoop:
>
> 
>
> I will test this approach.
>
>
> John Zhuge
> Software Engineer, Cloudera
>
> On Mon, May 16, 2016 at 9:39 AM, Chris Nauroth 
> wrote:
>
> > I'm in favor of disabling this, but I'm not sure how.  It looks like
> > HADOOP-12701 recently enabled Checkstyle in src/test, so we hadn't seen
> > this before.  Unfortunately, I can't find a way to keep Checkstyle
> > generally on for src/test, but with different rules from src/main.
> >
> > http://checkstyle.sourceforge.net/config_javadoc.html
> >
> >
> > Does anyone else have an idea?
> >
> >
> > --Chris Nauroth
> >
> >
> >
> >
> > On 5/16/16, 6:34 AM, "Steve Loughran"  wrote:
> >
> > >
> > >I've got checkstyle rejecting a patch as there's no package-info.java
> > >file in src/test
> > >
> > >https://issues.apache.org/jira/browse/HADOOP-13130
> > >
> > >I'm happy to argue the merits of the package-info files in production
> > >code: they can be good, if people put in the effort to write and
> > >maintain, but not for tests. Can we get this turned off?
> >
> >
> > -
> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >
> >
>
>


Build failed in Jenkins: Hadoop-common-trunk-Java8 #1480

2016-05-16 Thread Apache Jenkins Server
See 

Changes:

[cnauroth] HADOOP-13148. TestDistCpViewFs to include IOExceptions in test error

[cnauroth] HADOOP-13149. Windows distro build fails on dist-copynativelibs.

--
[...truncated 5591 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.http.TestRestCsrfPreventionFilter
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.408 sec - in 
org.apache.hadoop.security.http.TestRestCsrfPreventionFilter
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestSecurityUtil
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.663 sec - in 
org.apache.hadoop.security.TestSecurityUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestHttpCrossOriginFilterInitializer
Running org.apache.hadoop.security.TestLdapGroupsMappingWithPosixGroup
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.115 sec - in 
org.apache.hadoop.metrics2.sink.TestRollingFileSystemSink
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.303 sec - in 
org.apache.hadoop.security.TestHttpCrossOriginFilterInitializer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestGroupsCaching
Running org.apache.hadoop.security.authorize.TestProxyUsers
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.583 sec - in 
org.apache.hadoop.security.TestLdapGroupsMappingWithPosixGroup
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.authorize.TestServiceAuthorization
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.744 sec - in 
org.apache.hadoop.security.authorize.TestProxyUsers
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.authorize.TestProxyServers
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.859 sec - in 
org.apache.hadoop.security.authorize.TestServiceAuthorization
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.367 sec - in 
org.apache.hadoop.security.authorize.TestProxyServers
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.authorize.TestAccessControlList
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.677 sec - in 
org.apache.hadoop.security.TestGroupsCaching
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestProxyUserFromEnv
Running org.apache.hadoop.security.TestCredentials
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.509 sec - in 
org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.761 sec - in 
org.apache.hadoop.security.authorize.TestAccessControlList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.58 sec - in 
org.apache.hadoop.security.TestCredentials
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestWhitelistBasedResolver
Running org.apache.hadoop.security.TestLdapGroupsMapping
Running org.apache.hadoop.security.TestDoAsEffectiveUser
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.444 sec - in 
org.apache.hadoop.security.TestWhitelistBasedResolver
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestShellBasedUnixGroupsMapping
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.382 sec - in 
org.apache.hadoop.security.TestShellBasedUnixGroupsMapping
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.262 sec - in 
org.apache.hadoop.security.TestLdapGroupsMapping
Java HotSpot(TM) 64-Bit Server VM warning: ignoring 

[jira] [Created] (HADOOP-13157) Follow-on improvements to HADOOP-12942

2016-05-16 Thread Mike Yoder (JIRA)
Mike Yoder created HADOOP-13157:
---

 Summary: Follow-on improvements to HADOOP-12942
 Key: HADOOP-13157
 URL: https://issues.apache.org/jira/browse/HADOOP-13157
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Mike Yoder
Assignee: Mike Yoder


[~andrew.wang] had some follow-up code review comments from HADOOP-12942. Hence 
this issue.

Ping [~lmccay] as well.  

The comments:

{quote}
Overall this looks okay, the only correctness question I have is about the 
difference in behavior when the pwfile doesn't exist.

The rest are all nits, would be nice to do these cleanups though.

File 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java:

Line 147:
Could this be a static helper?

Line 161: new
The javadoc says it returns null in this situation. This is also a difference 
from the implementation in the AbstractJKSP. Intentional?

Line 175:   private void locateKeystore() throws IOException {
static helper? for the construct*Path methods too?

File 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java:

Line 50:   @VisibleForTesting public static final String NO_VALID_PROVIDERS =
FYI for the future, our coding style is to put annotations on their own 
separate line.

File 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java:

Line 326:   private char[] locatePassword() throws IOException {
this method looks very similar to the one in JavaKeyStoreProvider, except the 
env var it looks for is different, is there potential for code reuse?

Line 394:   "o In the environment variable " +
Using a "*" is the usual way of doing a bullet point, e.g. markdown and wiki 
syntax.

Line 399:   
"http://hadoop.apache.org/docs/current/hadoop-project-dist/" +
This link is not tied to a version, so could be inaccurate.
{quote}
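
To illustrate the annotation-placement nit above, a minimal sketch (the class and 
the constant's value are made up purely to show the layout):

{code}
import com.google.common.annotations.VisibleForTesting;

class StyleExample {
  // Preferred style: the annotation sits on its own line above the declaration,
  // rather than inline before the modifiers. The value is elided here.
  @VisibleForTesting
  static final String NO_VALID_PROVIDERS = "...";
}
{code}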




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: checkstyle and package-info

2016-05-16 Thread Bokor Andras
In Hadoop, JUnit tests do not always follow the Test.*.java pattern. Instead I suggest 
the following suppress element: <suppress checks="JavadocPackage" files="/src/test/.*"/>
This needs to be tested on Windows due to the slashes.
John Zhuge  wrote:
>http://stackoverflow.com/questions/5871020/different-checkstyle-rules-for-main-and-test-in-maven
>http://stackoverflow.com/questions/25894431/checkstyle-different-rules-for-different-files
>
>So something like this for Hadoop:
>
>
>
>I will test this approach.
>
>
>John Zhuge
>Software Engineer, Cloudera
>
>On Mon, May 16, 2016 at 9:39 AM, Chris Nauroth 
>wrote:
>
>> I'm in favor of disabling this, but I'm not sure how.  It looks like
>> HADOOP-12701 recently enabled Checkstyle in src/test, so we hadn't seen
>> this before.  Unfortunately, I can't find a way to keep Checkstyle
>> generally on for src/test, but with different rules from src/main.
>>
>> http://checkstyle.sourceforge.net/config_javadoc.html
>>
>>
>> Does anyone else have an idea?
>>
>>
>> --Chris Nauroth
>>
>>
>>
>>
>> On 5/16/16, 6:34 AM, "Steve Loughran"  wrote:
>>
>> >
>> >I've got checkstyle rejecting a patch as there's no package-info.java
>> >file in src/test
>> >
>> >https://issues.apache.org/jira/browse/HADOOP-13130
>> >
>> >I'm happy to argue the merits of the package-info files in production
>> >code: they can be good, if people put in the effort to write and
>> >maintain, but not for tests. Can we get this turned off?
>>
>>
>> -
>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>>
>


[jira] [Created] (HADOOP-13156) create-release.sh doesn't work for branch-2.8

2016-05-16 Thread Wangda Tan (JIRA)
Wangda Tan created HADOOP-13156:
---

 Summary: create-release.sh doesn't work for branch-2.8
 Key: HADOOP-13156
 URL: https://issues.apache.org/jira/browse/HADOOP-13156
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Wangda Tan
Priority: Blocker


A couple of issues found while trying to run dev-support/create-release.sh:

1) Missing files like release-notes.html and CHANGE.txt
2) After removing the lines that copy release-notes.html/CHANGE.txt, I still saw some 
issues, for example:

{code}
usage: cp [-R [-H | -L | -P]] [-fi | -n] [-apvX] source_file target_file
   cp [-R [-H | -L | -P]] [-fi | -n] [-apvX] source_file ... 
target_directory

Failed! running cp -r ../../target/r2.8.0-SNAPSHOT/api 
../../target/r2.8.0-SNAPSHOT/css 
../../target/r2.8.0-SNAPSHOT/dependency-analysis.html 
../../target/r2.8.0-SNAPSHOT/hadoop-annotations 
../../target/r2.8.0-SNAPSHOT/hadoop-ant 
../../target/r2.8.0-SNAPSHOT/hadoop-archive-logs 
../../target/r2.8.0-SNAPSHOT/hadoop-archives 
../../target/r2.8.0-SNAPSHOT/hadoop-assemblies 
../../target/r2.8.0-SNAPSHOT/hadoop-auth 
../../target/r2.8.0-SNAPSHOT/hadoop-auth-examples 
../../target/r2.8.0-SNAPSHOT/hadoop-aws 
../../target/r2.8.0-SNAPSHOT/hadoop-azure 
../../target/r2.8.0-SNAPSHOT/hadoop-common-project 
../../target/r2.8.0-SNAPSHOT/hadoop-datajoin 
../../target/r2.8.0-SNAPSHOT/hadoop-dist 
../../target/r2.8.0-SNAPSHOT/hadoop-distcp 
../../target/r2.8.0-SNAPSHOT/hadoop-extras 
../../target/r2.8.0-SNAPSHOT/hadoop-gridmix 
../../target/r2.8.0-SNAPSHOT/hadoop-hdfs-bkjournal 
../../target/r2.8.0-SNAPSHOT/hadoop-hdfs-httpfs 
../../target/r2.8.0-SNAPSHOT/hadoop-hdfs-nfs 
../../target/r2.8.0-SNAPSHOT/hadoop-hdfs-project 
../../target/r2.8.0-SNAPSHOT/hadoop-kms 
../../target/r2.8.0-SNAPSHOT/hadoop-mapreduce 
../../target/r2.8.0-SNAPSHOT/hadoop-mapreduce-client 
../../target/r2.8.0-SNAPSHOT/hadoop-mapreduce-examples 
../../target/r2.8.0-SNAPSHOT/hadoop-maven-plugins 
../../target/r2.8.0-SNAPSHOT/hadoop-minicluster 
../../target/r2.8.0-SNAPSHOT/hadoop-minikdc 
../../target/r2.8.0-SNAPSHOT/hadoop-nfs 
../../target/r2.8.0-SNAPSHOT/hadoop-openstack 
../../target/r2.8.0-SNAPSHOT/hadoop-pipes 
../../target/r2.8.0-SNAPSHOT/hadoop-project-dist 
../../target/r2.8.0-SNAPSHOT/hadoop-rumen 
../../target/r2.8.0-SNAPSHOT/hadoop-sls 
../../target/r2.8.0-SNAPSHOT/hadoop-streaming 
../../target/r2.8.0-SNAPSHOT/hadoop-tools 
../../target/r2.8.0-SNAPSHOT/hadoop-yarn 
../../target/r2.8.0-SNAPSHOT/hadoop-yarn-project 
../../target/r2.8.0-SNAPSHOT/images ../../target/r2.8.0-SNAPSHOT/index.html 
../../target/r2.8.0-SNAPSHOT/project-reports.html 
hadoop-2.8.0-SNAPSHOT/share/doc/hadoop/ in 
/Users/wtan/sandbox/hadoop-erie-copy/hadoop-dist/target
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: checkstyle and package-info

2016-05-16 Thread John Zhuge
http://stackoverflow.com/questions/5871020/different-checkstyle-rules-for-main-and-test-in-maven
http://stackoverflow.com/questions/25894431/checkstyle-different-rules-for-different-files

So something like this for Hadoop:



>I will test this approach.


John Zhuge
Software Engineer, Cloudera

On Mon, May 16, 2016 at 9:39 AM, Chris Nauroth 
wrote:

> I'm in favor of disabling this, but I'm not sure how.  It looks like
> HADOOP-12701 recently enabled Checkstyle in src/test, so we hadn't seen
> this before.  Unfortunately, I can't find a way to keep Checkstyle
> generally on for src/test, but with different rules from src/main.
>
> http://checkstyle.sourceforge.net/config_javadoc.html
>
>
> Does anyone else have an idea?
>
>
> --Chris Nauroth
>
>
>
>
> On 5/16/16, 6:34 AM, "Steve Loughran"  wrote:
>
> >
> >I've got checkstyle rejecting a patch as there's no package-info.java
> > >file in src/test
> >
> >https://issues.apache.org/jira/browse/HADOOP-13130
> >
> >I'm happy to argue the merits of the package-info files in production
> >code: they can be good, if people put in the effort to write and
> >maintain, but not for tests. Can we get this turned off?
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


[jira] [Created] (HADOOP-13155) Implement TokenRenewer in KMS and HttpFS

2016-05-16 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-13155:
--

 Summary: Implement TokenRenewer in KMS and HttpFS
 Key: HADOOP-13155
 URL: https://issues.apache.org/jira/browse/HADOOP-13155
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiao Chen
Assignee: Xiao Chen


Delegation token renewal is done in YARN by {{DelegationTokenRenewer}}, which calls 
{{Token#renew}} and uses ServiceLoader to get the renewer class 
([code|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java#L382]).

We seem to be missing the token renewer class for KMS/HttpFSFileSystem, so YARN 
falls back to {{TrivialRenewer}} for these token kinds and the tokens are never 
renewed.

As a side note, {{HttpFSFileSystem}} does have a {{renewDelegationToken}} API, 
but I don't see it invoked anywhere in the Hadoop code base. KMS does not have 
any renew hook.
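
As a rough sketch (not the eventual KMS/HttpFS implementation), a renewer that 
{{Token#renew}} can discover via ServiceLoader just follows the existing 
{{org.apache.hadoop.security.token.TokenRenewer}} contract; the class name, token 
kind, and renewal call below are placeholders:

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenRenewer;

// Hypothetical renewer; a real one would also be listed in
// META-INF/services/org.apache.hadoop.security.token.TokenRenewer
// so that Token#renew can find it through ServiceLoader.
public class KMSDelegationTokenRenewer extends TokenRenewer {
  private static final Text KIND = new Text("kms-dt");   // placeholder kind

  @Override
  public boolean handleKind(Text kind) {
    return KIND.equals(kind);
  }

  @Override
  public boolean isManaged(Token<?> token) throws IOException {
    return true;   // renewable, unlike TrivialRenewer
  }

  @Override
  public long renew(Token<?> token, Configuration conf)
      throws IOException, InterruptedException {
    // A real implementation would call the KMS/HttpFS renewal endpoint here
    // and return the new expiry time.
    throw new UnsupportedOperationException("sketch only");
  }

  @Override
  public void cancel(Token<?> token, Configuration conf)
      throws IOException, InterruptedException {
    throw new UnsupportedOperationException("sketch only");
  }
}
{code}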



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Merge feature branch HADOOP-12930

2016-05-16 Thread Chris Nauroth
Understood about the tests.

--Chris Nauroth




On 5/15/16, 7:30 AM, "Allen Wittenauer"  wrote:

>
>> On May 14, 2016, at 3:11 PM, Chris Nauroth 
>>wrote:
>> 
>> +1 (binding)
>> 
>> -Tried a dry-run merge of HADOOP-12930 to trunk.
>> -Successfully built distro on Windows.
>> -Ran "hdfs namenode", "hdfs datanode", and various interactive hdfs
>> commands through Cygwin.
>> -Reviewed documentation.
>> 
>> Allen, thank you for the contribution.  Would you please attach a full
>> patch to HADOOP-12930 to check pre-commit results?
>
>
>   Nope.  The whole reason this was done as a branch with multiple patches
>was to prevent Jenkins from getting overwhelmed since it would trigger
>full unit tests on pretty much the entire code base….
>
>> While testing this, I discovered a bug in the distro build for Windows.
>> Could someone please code review my patch on HADOOP-13149?
>
>   Done!
>
>> 
>> --Chris Nauroth
>> 
>> 
>> 
>> 
>> On 5/9/16, 1:26 PM, "Allen Wittenauer"  wrote:
>> 
>>> 
>>> Hey gang!
>>> 
>>> I'd like to call a vote to run for 7 days (ending May 16 at 13:30 PT)
>>>to
>>> merge the HADOOP-12930 feature branch into trunk. This branch was
>>> developed exclusively by me as per the discussion two months ago as a
>>>way
>>> to make what would be a rather large patch hopefully easier to review.
>>> The vast majority of the branch is code movement in the same file,
>>> additional license headers, maven assembly hooks for distribution, and
>>> variable renames. Not a whole lot of new code, but a big diff file
>>> none-the-less.
>>> 
>>> This branch modifies the 'hadoop', 'hdfs', 'mapred', and 'yarn'
>>>commands
>>> to allow for subcommands to be added or modified at runtime.  This
>>>allows
>>> for individual users or entire sites to tweak the execution environment
>>> to suit their local needs.  For example, it has been a practice for
>>>some
>>> locations to change the distcp jar out for a custom one.  Using this
>>> functionality, it is possible that the Œhadoop distcp¹ command could
>>>run
>>> the local version without overwriting the bundled jar and for existing
>>> documentation (read: results from Internet searches) to work as written
>>> without modification. This has the potential to be a huge win,
>>>especially
>>> for:
>>> 
>>> * advanced end users looking to supplement the Apache Hadoop
>>>experience
>>> * operations teams that may be able to leverage existing
>>>documentation
>>> without having to maintain local "exception" docs
>>> * development groups wanting an easy way to trial experimental
>>>features
>>> 
>>> Additionally, this branch includes the following, related changes:
>>> 
>>> * Adds the first unit tests for the 'hadoop' command
>>> * Adds the infrastructure for hdfs script testing and the first 
>>> unit
>>> test for the 'hdfs' command
>>> * Modifies the hadoop-tools components to be dynamic rather 
>>> than hard
>>> coded
>>> * Renames the shell profiles for hdfs, mapred, and yarn to be
>>> consistent with other bundled profiles, including the ones introduced
>>>in
>>> this branch
>>> 
>>> Documentation, including a 'hello world'-style example, is in the
>>> UnixShellGuide markdown file.  (Of course!)
>>> 
>>>  I am at ApacheCon this week if anyone wants to discuss in-depth.
>>> 
>>> Thanks!
>>> 
>>> P.S.,
>>> 
>>> There are still two open sub-tasks.  These are blocked by other issues
>>> so that we may add unit testing to the shell code in those respective
>>> areas.  I'll convert to full issues after HADOOP-12930 is closed.
>>> 
>>> 
>>> -
>>> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
>>> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>>> 
>>> 
>> 
>
>


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org


Re: checkstyle and package-info

2016-05-16 Thread Chris Nauroth
I'm in favor of disabling this, but I'm not sure how.  It looks like
HADOOP-12701 recently enabled Checkstyle in src/test, so we hadn't seen
this before.  Unfortunately, I can't find a way to keep Checkstyle
generally on for src/test, but with different rules from src/main.

http://checkstyle.sourceforge.net/config_javadoc.html


Does anyone else have an idea?


--Chris Nauroth




On 5/16/16, 6:34 AM, "Steve Loughran"  wrote:

>
>I've got checkstyle rejecting a patch as there's no package-info.java
>file in src/test
>
>https://issues.apache.org/jira/browse/HADOOP-13130
>
>I'm happy to argue the merits of the package-info files in production
>code: they can be good, if people put in the effort to write and
>maintain, but not for tests. Can we get this turned off?


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Build failed in Jenkins: Hadoop-Common-trunk #2767

2016-05-16 Thread Apache Jenkins Server
See 

Changes:

[jlowe] YARN-4325. Nodemanager log handlers fail to send finished/failed events

--
[...truncated 5161 lines...]
Running org.apache.hadoop.conf.TestReconfiguration
Tests run: 62, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.295 sec - in 
org.apache.hadoop.conf.TestConfiguration
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.729 sec - in 
org.apache.hadoop.conf.TestConfigurationDeprecation
Running org.apache.hadoop.conf.TestConfServlet
Running org.apache.hadoop.conf.TestCommonConfigurationFields
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.89 sec - in 
org.apache.hadoop.conf.TestConfServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.738 sec - in 
org.apache.hadoop.conf.TestCommonConfigurationFields
Running org.apache.hadoop.test.TestJUnitSetup
Running org.apache.hadoop.test.TestMultithreadedTestUtil
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.601 sec - in 
org.apache.hadoop.conf.TestReconfiguration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.191 sec - in 
org.apache.hadoop.test.TestJUnitSetup
Running org.apache.hadoop.test.TestTimedOutTestsListener
Running org.apache.hadoop.test.TestGenericTestUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.2 sec - in 
org.apache.hadoop.test.TestGenericTestUtils
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.215 sec - in 
org.apache.hadoop.test.TestTimedOutTestsListener
Running org.apache.hadoop.net.TestNetUtils
Running org.apache.hadoop.net.TestDNS
Tests run: 12, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.41 sec <<< 
FAILURE! - in org.apache.hadoop.net.TestDNS
testNullDnsServer(org.apache.hadoop.net.TestDNS)  Time elapsed: 0.085 sec  <<< 
FAILURE!
java.lang.AssertionError: 
Expected: is "asf906.gq1.ygridcore.net"
 but: was "localhost"
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:865)
at org.junit.Assert.assertThat(Assert.java:832)
at org.apache.hadoop.net.TestDNS.testNullDnsServer(TestDNS.java:124)

Running org.apache.hadoop.net.TestSocketIOWithTimeout
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.187 sec - in 
org.apache.hadoop.test.TestMultithreadedTestUtil
Running org.apache.hadoop.net.TestNetworkTopologyWithNodeGroup
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.205 sec - in 
org.apache.hadoop.net.TestNetworkTopologyWithNodeGroup
Running org.apache.hadoop.net.TestClusterTopology
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.235 sec - in 
org.apache.hadoop.net.TestClusterTopology
Running org.apache.hadoop.net.TestScriptBasedMappingWithDependency
Tests run: 41, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.075 sec - in 
org.apache.hadoop.net.TestNetUtils
Running org.apache.hadoop.net.TestTableMapping
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.845 sec - in 
org.apache.hadoop.net.TestScriptBasedMappingWithDependency
Running org.apache.hadoop.net.TestScriptBasedMapping
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.675 sec - in 
org.apache.hadoop.net.TestTableMapping
Running org.apache.hadoop.net.unix.TestDomainSocketWatcher
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.767 sec - in 
org.apache.hadoop.net.TestScriptBasedMapping
Running org.apache.hadoop.net.unix.TestDomainSocket
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.633 sec - in 
org.apache.hadoop.net.unix.TestDomainSocketWatcher
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.394 sec - in 
org.apache.hadoop.net.TestSocketIOWithTimeout
Running org.apache.hadoop.net.TestSwitchMapping
Running org.apache.hadoop.net.TestStaticMapping
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.469 sec - in 
org.apache.hadoop.net.TestSwitchMapping
Running org.apache.hadoop.cli.TestCLI
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.161 sec - in 
org.apache.hadoop.net.TestStaticMapping
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.685 sec - in 
org.apache.hadoop.cli.TestCLI
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.485 sec - in 
org.apache.hadoop.metrics2.lib.TestMutableMetrics
Running org.apache.hadoop.io.TestSortedMapWritable
Running org.apache.hadoop.io.TestIOUtils
Running org.apache.hadoop.io.TestSequenceFile
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.251 sec - in 
org.apache.hadoop.io.TestSortedMapWritable
Running org.apache.hadoop.io.TestEnumSetWritable
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.483 sec - in 
org.apache.hadoop.io.TestIOUtils
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.911 sec - in 

Re: Different JIRA permissions for HADOOP and HDFS

2016-05-16 Thread Zhihai Xu
Great, Thanks Junping! Yes, the JIRA assignment works for me now.

zhihai

On Mon, May 16, 2016 at 5:29 AM, Junping Du  wrote:

> Zhihai, I just set you with committer permissions on MAPREDUCE JIRA. Would
> you try if the JIRA assignment works now? I cannot help on Hive project. It
> is better to ask hive project community for help.
> For Arun's problem: from my check, the Edit permission on JIRA is
> authorized to Administrators only. I don't know if this setting is
> intentional, but it was not like this previously.
> Can whoever made the change clarify why we need this change, or
> revert to whatever it used to be?
>
> Thanks,
>
> Junping
> 
> From: Arun Suresh 
> Sent: Monday, May 16, 2016 9:42 AM
> To: Zhihai Xu
> Cc: Zheng, Kai; Andrew Wang; common-dev@hadoop.apache.org;
> yarn-...@hadoop.apache.org
> Subject: Re: Different JIRA permissions for HADOOP and HDFS
>
> Not sure if this is related.. but It also looks like I am now no longer
> allowed to modify description and headline of JIRAs anymore..
> Would appreciate greatly if someone can help revert this !
>
> Cheers
> -Arun
>
> On Mon, May 16, 2016 at 1:21 AM, Zhihai Xu  wrote:
>
> > Currently I also have permission issue to access the JIRA. I can't assign
> > the JIRA(I created) to myself. For example,
> > https://issues.apache.org/jira/browse/MAPREDUCE-6696 and
> > https://issues.apache.org/jira/browse/HIVE-13760. I can't find the
> button
> > to assign the JIRA to myself.
> > I don't have this issue two three weeks ago. Did anything change
> recently?
> > Can anyone help me solve this issue?
> >
> > thanks
> > zhihai
> >
> >
> >
> >
> > On Mon, May 16, 2016 at 12:22 AM, Zheng, Kai 
> wrote:
> >
> > > It works for me now, thanks Andrew!
> > >
> > > Regards,
> > > Kai
> > >
> > > -Original Message-
> > > From: Andrew Wang [mailto:andrew.w...@cloudera.com]
> > > Sent: Monday, May 16, 2016 12:14 AM
> > > To: Zheng, Kai 
> > > Cc: common-dev@hadoop.apache.org
> > > Subject: Re: Different JIRA permissions for HADOOP and HDFS
> > >
> > > I just gave you committer permissions on JIRA, try now?
> > >
> > > On Mon, May 16, 2016 at 12:03 AM, Zheng, Kai 
> > wrote:
> > >
> > > > I just ran into the bad situation that I committed HDFS-8449 but
> can't
> > > > resolve the issue due to lacking the required permission to me. Am
> not
> > > > sure if it's caused by my setup or environment change (temporarily
> > > > working in a new time zone). Would anyone help resolve the issue for
> > > > me to avoid bad state? Thanks!
> > > >
> > > > -Original Message-
> > > > From: Zheng, Kai [mailto:kai.zh...@intel.com]
> > > > Sent: Sunday, May 15, 2016 3:20 PM
> > > > To: Allen Wittenauer 
> > > > Cc: common-dev@hadoop.apache.org
> > > > Subject: RE: Different JIRA permissions for HADOOP and HDFS
> > > >
> > > > Thanks Allen for illustrating this in details. I understand. The left
> > > > question is, is it intended only JIRA owner (not sure about admin
> > > > users) can do the operations like updating a patch?
> > > >
> > > > Regards,
> > > > Kai
> > > >
> > > > -Original Message-
> > > > From: Allen Wittenauer [mailto:allenwittena...@yahoo.com.INVALID]
> > > > Sent: Saturday, May 14, 2016 9:38 AM
> > > > To: Zheng, Kai 
> > > > Cc: common-dev@hadoop.apache.org
> > > > Subject: Re: Different JIRA permissions for HADOOP and HDFS
> > > >
> > > >
> > > > > On May 14, 2016, at 7:07 AM, Zheng, Kai 
> wrote:
> > > > >
> > > > > Hi,
> > > > >
> > > > > Noticed this difference but not sure if it’s intended. YARN is
> > > > > similar
> > > > with HDFS. It’s not convenient. Any clarifying?
> > > >
> > > >
> > > > Under JIRA, different projects (e.g., HADOOP, YARN,
> MAPREDUCE,
> > > > HDFS, YETUS, HBASE, ACCUMULO, etc) may have different settings.  At
> > > > one point in time, all of the Hadoop subprojects were under one JIRA
> > > > project (HADOOP). But then a bunch of folks decided they didn’t want
> > > > to see the other sub projects issues so they split them up…. and thus
> > > > setting the stage for duplicate code and operational divergence in
> the
> > > source.
> > > >
> > > > Since people don’t realize or care that they are separate,
> > > > people will file INFRA tickets or whatever to change “their project”
> > > > and not the rest. This leads to the JIRA projects also diverging…
> > > > which ultimately drives those of us who actually look at the project
> as
> > > a whole bonkers.
> > > > -
> > > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> > > >
> > > >
> > > > 

checkstyle and package-info

2016-05-16 Thread Steve Loughran

I've got checkstyle rejecting a patch as there's no package-info.java file in 
src/test

https://issues.apache.org/jira/browse/HADOOP-13130

I'm happy to argue the merits of the package-info files in production code: 
they can be good, if people put in the effort to write and maintain, but not 
for tests. Can we get this turned off?


[jira] [Created] (HADOOP-13154) S3AFileSystem printAmazonServiceException/printAmazonClientException appear copy & paste of AWS examples

2016-05-16 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13154:
---

 Summary: S3AFileSystem 
printAmazonServiceException/printAmazonClientException appear copy & paste of 
AWS examples
 Key: HADOOP-13154
 URL: https://issues.apache.org/jira/browse/HADOOP-13154
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.7.2
Reporter: Steve Loughran
Priority: Blocker


The logging code in {{S3AFileSystem.printAmazonServiceException()}} and 
{{printAmazonClientException}} appears to be a paste-and-edit of the example code 
in the Amazon SDK, such as 
[http://docs.aws.amazon.com/AmazonS3/latest/dev/ListingObjectKeysUsingJava.html]

Either we review the license to validate it, and add credits to the code if 
compatible, or we rework it. HADOOP-13130 would be the place to do that, as it is 
changing exception handling anyway.

Tagging as a blocker as it is license-related.
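
As a rough sketch of the rework option (illustrative only, not the S3AFileSystem 
code, and assuming just the stock AmazonServiceException/AmazonClientException 
types from the AWS SDK):

{code}
import java.io.IOException;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;

// Illustrative helper: wrap AWS SDK exceptions in IOExceptions so callers get a
// single failure path instead of log-and-continue.
final class S3AExceptionTranslation {
  private S3AExceptionTranslation() {
  }

  static IOException translate(String operation, AmazonClientException e) {
    if (e instanceof AmazonServiceException) {
      AmazonServiceException ase = (AmazonServiceException) e;
      return new IOException(operation + " failed: HTTP " + ase.getStatusCode()
          + ", AWS error code " + ase.getErrorCode(), ase);
    }
    return new IOException(operation + " failed: " + e.getMessage(), e);
  }
}
{code}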



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Merge feature branch HADOOP-12930

2016-05-16 Thread Akira AJISAKA

+1.

- Checked out HADOOP-12930
- Built by "mvn package -Pdist -DskipTests -Dtar" successfully
- Added custom subcommands to ~/.hadooprc and confirmed they worked
- Built documentation and it looks good

Thanks,
Akira

On 5/15/16 02:33, Allen Wittenauer wrote:


This vote closes in 2 days and the only response has been from a non-committer 
and one of the 137 other committers on the project…. it’d be great if some 
others could take a look.

Thanks!


On May 12, 2016, at 6:07 PM, Andrew Wang  wrote:

+1. I looked at the patches on the branch, wasn't too bad to review. As
Allen said, there's some code movement, assorted other nice doc and shell
fixups.

Found one extra typo, which I added to HADOOP-13129.

Best,
Andrew

On Wed, May 11, 2016 at 1:14 AM, Sean Busbey  wrote:


+1 (non-binding)

reviewed everything, filed an additional subtask for a very trivial
typo in the docs. should be fine to make a full issue after close and
then fix.

tried merging locally, tried running through new shell tests (both
with and without bats installed), tried making an example custom
command (valid and malformed). everything looks great.

On Mon, May 9, 2016 at 1:26 PM, Allen Wittenauer  wrote:


   Hey gang!

   I’d like to call a vote to run for 7 days (ending May 16 at

13:30 PT) to merge the HADOOP-12930 feature branch into trunk. This branch
was developed exclusively by me as per the discussion two months ago as a
way to make what would be a rather large patch hopefully easier to review.
The vast majority of the branch is code movement in the same file,
additional license headers, maven assembly hooks for distribution, and
variable renames. Not a whole lot of new code, but a big diff file
none-the-less.


   This branch modifies the ‘hadoop’, ‘hdfs’, ‘mapred’, and ‘yarn’

commands to allow for subcommands to be added or modified at runtime.  This
allows for individual users or entire sites to tweak the execution
environment to suit their local needs.  For example, it has been a practice
for some locations to change the distcp jar out for a custom one.  Using
this functionality, it is possible that the ‘hadoop distcp’ command could
run the local version without overwriting the bundled jar and for existing
documentation (read: results from Internet searches) to work as written
without modification. This has the potential to be a huge win, especially
for:


   * advanced end users looking to supplement the Apache

Hadoop experience

   * operations teams that may be able to leverage existing

documentation without having to maintain local “exception” docs

   * development groups wanting an easy way to trial

experimental features


   Additionally, this branch includes the following, related

changes:


   * Adds the first unit tests for the ‘hadoop’ command
   * Adds the infrastructure for hdfs script testing and

the first unit test for the ‘hdfs’ command

   * Modifies the hadoop-tools components to be dynamic

rather than hard coded

   * Renames the shell profiles for hdfs, mapred, and yarn

to be consistent with other bundled profiles, including the ones introduced
in this branch


   Documentation, including a ‘hello world’-style example, is in

the UnixShellGuide markdown file.  (Of course!)


I am at ApacheCon this week if anyone wants to discuss in-depth.

   Thanks!

P.S.,

   There are still two open sub-tasks.  These are blocked by other

issues so that we may add unit testing to the shell code in those
respective areas.  I'll convert to full issues after HADOOP-12930 is closed.



-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org





--
busbey

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org





-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org




-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Different JIRA permissions for HADOOP and HDFS

2016-05-16 Thread Junping Du
Zhihai, I just set you with committer permissions on MAPREDUCE JIRA. Would you 
try if the JIRA assignment works now? I cannot help on Hive project. It is 
better to ask hive project community for help.
For Arun's problem: from my check, the Edit permission on JIRA is authorized 
to Administrators only. I don't know if this setting is intentional, but it was 
not like this previously. 
Can whoever made the change clarify why we need this change, or revert to 
whatever it used to be?

Thanks,

Junping

From: Arun Suresh 
Sent: Monday, May 16, 2016 9:42 AM
To: Zhihai Xu
Cc: Zheng, Kai; Andrew Wang; common-dev@hadoop.apache.org; 
yarn-...@hadoop.apache.org
Subject: Re: Different JIRA permissions for HADOOP and HDFS

Not sure if this is related.. but It also looks like I am now no longer
allowed to modify description and headline of JIRAs anymore..
Would appreciate greatly if someone can help revert this !

Cheers
-Arun

On Mon, May 16, 2016 at 1:21 AM, Zhihai Xu  wrote:

> Currently I also have permission issue to access the JIRA. I can't assign
> the JIRA(I created) to myself. For example,
> https://issues.apache.org/jira/browse/MAPREDUCE-6696 and
> https://issues.apache.org/jira/browse/HIVE-13760. I can't find the button
> to assign the JIRA to myself.
> I don't have this issue two three weeks ago. Did anything change recently?
> Can anyone help me solve this issue?
>
> thanks
> zhihai
>
>
>
>
> On Mon, May 16, 2016 at 12:22 AM, Zheng, Kai  wrote:
>
> > It works for me now, thanks Andrew!
> >
> > Regards,
> > Kai
> >
> > -Original Message-
> > From: Andrew Wang [mailto:andrew.w...@cloudera.com]
> > Sent: Monday, May 16, 2016 12:14 AM
> > To: Zheng, Kai 
> > Cc: common-dev@hadoop.apache.org
> > Subject: Re: Different JIRA permissions for HADOOP and HDFS
> >
> > I just gave you committer permissions on JIRA, try now?
> >
> > On Mon, May 16, 2016 at 12:03 AM, Zheng, Kai 
> wrote:
> >
> > > I just ran into the bad situation that I committed HDFS-8449 but can't
> > > resolve the issue due to lacking the required permission to me. Am not
> > > sure if it's caused by my setup or environment change (temporarily
> > > working in a new time zone). Would anyone help resolve the issue for
> > > me to avoid bad state? Thanks!
> > >
> > > -Original Message-
> > > From: Zheng, Kai [mailto:kai.zh...@intel.com]
> > > Sent: Sunday, May 15, 2016 3:20 PM
> > > To: Allen Wittenauer 
> > > Cc: common-dev@hadoop.apache.org
> > > Subject: RE: Different JIRA permissions for HADOOP and HDFS
> > >
> > > Thanks Allen for illustrating this in details. I understand. The left
> > > question is, is it intended only JIRA owner (not sure about admin
> > > users) can do the operations like updating a patch?
> > >
> > > Regards,
> > > Kai
> > >
> > > -Original Message-
> > > From: Allen Wittenauer [mailto:allenwittena...@yahoo.com.INVALID]
> > > Sent: Saturday, May 14, 2016 9:38 AM
> > > To: Zheng, Kai 
> > > Cc: common-dev@hadoop.apache.org
> > > Subject: Re: Different JIRA permissions for HADOOP and HDFS
> > >
> > >
> > > > On May 14, 2016, at 7:07 AM, Zheng, Kai  wrote:
> > > >
> > > > Hi,
> > > >
> > > > Noticed this difference but not sure if it’s intended. YARN is
> > > > similar
> > > with HDFS. It’s not convenient. Any clarifying?
> > >
> > >
> > > Under JIRA, different projects (e.g., HADOOP, YARN, MAPREDUCE,
> > > HDFS, YETUS, HBASE, ACCUMULO, etc) may have different settings.  At
> > > one point in time, all of the Hadoop subprojects were under one JIRA
> > > project (HADOOP). But then a bunch of folks decided they didn’t want
> > > to see the other sub projects issues so they split them up…. and thus
> > > setting the stage for duplicate code and operational divergence in the
> > source.
> > >
> > > Since people don’t realize or care that they are separate,
> > > people will file INFRA tickets or whatever to change “their project”
> > > and not the rest. This leads to the JIRA projects also diverging…
> > > which ultimately drives those of us who actually look at the project as
> > a whole bonkers.
> > > -
> > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> > >
> > >
> > > -
> > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> > >
> >
> > -
> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >
>


[jira] [Resolved] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-16 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay resolved HADOOP-12942.
--
Resolution: Fixed

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch, HADOOP-12942.007.patch, HADOOP-12942.008.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method, like the first 
> one but additionally taking the password, and that an additional constructor 
> taking the password should exist in all the implementations. 
> Then we just ask for the password in getCredentialProvider() and it gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".
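
To make the exposure concrete, the following is a small illustrative sketch (the 
file path and credential alias are made up, and the keytool check assumes a stock 
JDK keytool with JCEKS support):
{noformat}
# Create a credential without setting a keystore password; the resulting jceks
# file is protected only by the default password "none".
hadoop credential create db.password -provider localjceks://file/tmp/creds.jceks

# Anyone holding the file can then open it with that default password, e.g.:
keytool -list -storetype jceks -keystore /tmp/creds.jceks -storepass none

# Supplying a real keystore password currently requires the environment variable:
HADOOP_CREDSTORE_PASSWORD=s3cr3t \
  hadoop credential create db.password -provider localjceks://file/tmp/creds.jceks
{noformat}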



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-16 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay reopened HADOOP-12942:
--

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch, HADOOP-12942.007.patch, HADOOP-12942.008.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method, like the first 
> one but additionally taking the password, and that an additional constructor 
> taking the password should exist in all the implementations. 
> Then we just ask for the password in getCredentialProvider() and it gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Different JIRA permissions for HADOOP and HDFS

2016-05-16 Thread Steve Loughran

> On 16 May 2016, at 09:21, Zhihai Xu  wrote:
> 
> Currently I also have permission issue to access the JIRA. I can't assign
> the JIRA(I created) to myself. For example,
> https://issues.apache.org/jira/browse/MAPREDUCE-6696 and
> https://issues.apache.org/jira/browse/HIVE-13760. I can't find the button
> to assign the JIRA to myself.
> I don't have this issue two three weeks ago. Did anything change recently?
> Can anyone help me solve this issue?
> 

As I warned on another patch, JIRA permissions have been cranked back a lot due to 
spam problems. I think you may now need to be tagged as a contributor to assign 
work to yourself, and possibly to upload patches.

Don't worry too much about the assignee bit: if you keep submitting patches and 
talking about the issue, it's clear you are working on it. Patch submission, 
though, will be needed, and I've been having problems there even though I have 
full rights.

Re: Different JIRA permissions for HADOOP and HDFS

2016-05-16 Thread Arun Suresh
Not sure if this is related, but it also looks like I am no longer allowed to
modify the description and headline of JIRAs.
I would greatly appreciate it if someone could help revert this!

Cheers
-Arun

On Mon, May 16, 2016 at 1:21 AM, Zhihai Xu  wrote:

> Currently I also have permission issue to access the JIRA. I can't assign
> the JIRA(I created) to myself. For example,
> https://issues.apache.org/jira/browse/MAPREDUCE-6696 and
> https://issues.apache.org/jira/browse/HIVE-13760. I can't find the button
> to assign the JIRA to myself.
> I don't have this issue two three weeks ago. Did anything change recently?
> Can anyone help me solve this issue?
>
> thanks
> zhihai
>
>
>
>
> On Mon, May 16, 2016 at 12:22 AM, Zheng, Kai  wrote:
>
> > It works for me now, thanks Andrew!
> >
> > Regards,
> > Kai
> >
> > -Original Message-
> > From: Andrew Wang [mailto:andrew.w...@cloudera.com]
> > Sent: Monday, May 16, 2016 12:14 AM
> > To: Zheng, Kai 
> > Cc: common-dev@hadoop.apache.org
> > Subject: Re: Different JIRA permissions for HADOOP and HDFS
> >
> > I just gave you committer permissions on JIRA, try now?
> >
> > On Mon, May 16, 2016 at 12:03 AM, Zheng, Kai 
> wrote:
> >
> > > I just ran into the bad situation that I committed HDFS-8449 but can't
> > > resolve the issue due to lacking the required permission to me. Am not
> > > sure if it's caused by my setup or environment change (temporally
> > > working in a new time zone). Would anyone help resolve the issue for
> > > me to avoid bad state? Thanks!
> > >
> > > -Original Message-
> > > From: Zheng, Kai [mailto:kai.zh...@intel.com]
> > > Sent: Sunday, May 15, 2016 3:20 PM
> > > To: Allen Wittenauer 
> > > Cc: common-dev@hadoop.apache.org
> > > Subject: RE: Different JIRA permissions for HADOOP and HDFS
> > >
> > > Thanks Allen for illustrating this in details. I understand. The left
> > > question is, is it intended only JIRA owner (not sure about admin
> > > users) can do the operations like updating a patch?
> > >
> > > Regards,
> > > Kai
> > >
> > > -Original Message-
> > > From: Allen Wittenauer [mailto:allenwittena...@yahoo.com.INVALID]
> > > Sent: Saturday, May 14, 2016 9:38 AM
> > > To: Zheng, Kai 
> > > Cc: common-dev@hadoop.apache.org
> > > Subject: Re: Different JIRA permissions for HADOOP and HDFS
> > >
> > >
> > > > On May 14, 2016, at 7:07 AM, Zheng, Kai  wrote:
> > > >
> > > > Hi,
> > > >
> > > > Noticed this difference but not sure if it’s intended. YARN is
> > > > similar
> > > with HDFS. It’s not convenient. Any clarifying?
> > >
> > >
> > > Under JIRA, different projects (e.g., HADOOP, YARN, MAPREDUCE,
> > > HDFS, YETUS, HBASE, ACCUMULO, etc) may have different settings.  At
> > > one point in time, all of the Hadoop subprojects were under one JIRA
> > > project (HADOOP). But then a bunch of folks decided they didn’t want
> > > to see the other sub projects issues so they split them up…. and thus
> > > setting the stage for duplicate code and operational divergence in the
> > source.
> > >
> > > Since people don’t realize or care that they are separate,
> > > people will file INFRA tickets or whatever to change “their project”
> > > and not the rest. This leads to the JIRA projects also diverging…
> > > which ultimately drives those of us who actually look at the project as
> > a whole bonkers.
> > > -
> > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> > >
> > >
> > > -
> > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> > >
> >
> > -
> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >
>


Re: Different JIRA permissions for HADOOP and HDFS

2016-05-16 Thread Zhihai Xu
I currently also have a permission issue with JIRA: I can't assign a JIRA I
created to myself. For example, on
https://issues.apache.org/jira/browse/MAPREDUCE-6696 and
https://issues.apache.org/jira/browse/HIVE-13760 I can't find the button
to assign the issue to myself.
I didn't have this problem two or three weeks ago. Did anything change recently?
Can anyone help me resolve this?

thanks
zhihai




On Mon, May 16, 2016 at 12:22 AM, Zheng, Kai  wrote:

> It works for me now, thanks Andrew!
>
> Regards,
> Kai
>
> -Original Message-
> From: Andrew Wang [mailto:andrew.w...@cloudera.com]
> Sent: Monday, May 16, 2016 12:14 AM
> To: Zheng, Kai 
> Cc: common-dev@hadoop.apache.org
> Subject: Re: Different JIRA permissions for HADOOP and HDFS
>
> I just gave you committer permissions on JIRA, try now?
>
> On Mon, May 16, 2016 at 12:03 AM, Zheng, Kai  wrote:
>
> > I just ran into the bad situation that I committed HDFS-8449 but can't
> > resolve the issue due to lacking the required permission to me. Am not
> > sure if it's caused by my setup or environment change (temporally
> > working in a new time zone). Would anyone help resolve the issue for
> > me to avoid bad state? Thanks!
> >
> > -Original Message-
> > From: Zheng, Kai [mailto:kai.zh...@intel.com]
> > Sent: Sunday, May 15, 2016 3:20 PM
> > To: Allen Wittenauer 
> > Cc: common-dev@hadoop.apache.org
> > Subject: RE: Different JIRA permissions for HADOOP and HDFS
> >
> > Thanks Allen for illustrating this in details. I understand. The left
> > question is, is it intended only JIRA owner (not sure about admin
> > users) can do the operations like updating a patch?
> >
> > Regards,
> > Kai
> >
> > -Original Message-
> > From: Allen Wittenauer [mailto:allenwittena...@yahoo.com.INVALID]
> > Sent: Saturday, May 14, 2016 9:38 AM
> > To: Zheng, Kai 
> > Cc: common-dev@hadoop.apache.org
> > Subject: Re: Different JIRA permissions for HADOOP and HDFS
> >
> >
> > > On May 14, 2016, at 7:07 AM, Zheng, Kai  wrote:
> > >
> > > Hi,
> > >
> > > Noticed this difference but not sure if it’s intended. YARN is
> > > similar
> > with HDFS. It’s not convenient. Any clarifying?
> >
> >
> > Under JIRA, different projects (e.g., HADOOP, YARN, MAPREDUCE,
> > HDFS, YETUS, HBASE, ACCUMULO, etc) may have different settings.  At
> > one point in time, all of the Hadoop subprojects were under one JIRA
> > project (HADOOP). But then a bunch of folks decided they didn’t want
> > to see the other sub projects issues so they split them up…. and thus
> > setting the stage for duplicate code and operational divergence in the
> source.
> >
> > Since people don’t realize or care that they are separate,
> > people will file INFRA tickets or whatever to change “their project”
> > and not the rest. This leads to the JIRA projects also diverging…
> > which ultimately drives those of us who actually look at the project as
> a whole bonkers.
> > -
> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >
> >
> > -
> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>


Build failed in Jenkins: Hadoop-Common-trunk #2766

2016-05-16 Thread Apache Jenkins Server
See 

Changes:

[kai.zheng] HDFS-8449. Add tasks count metrics to datanode for ECWorker. 
Contributed

--
[...truncated 4050 lines...]
[ERROR] Artifact: jdk.tools:jdk.tools:jar:1.7 has no file.
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-auth ---
[INFO] 
Loading source files for package org.apache.hadoop.util...
Loading source files for package 
org.apache.hadoop.security.authentication.util...
Loading source files for package 
org.apache.hadoop.security.authentication.server...
Loading source files for package 
org.apache.hadoop.security.authentication.client...
Constructing Javadoc information...
Standard Doclet version 1.7.0_55
Building tree for all the packages and classes...
Generating  [...remaining javadoc output truncated...]

RE: Different JIRA permissions for HADOOP and HDFS

2016-05-16 Thread Zheng, Kai
It works for me now, thanks Andrew!

Regards,
Kai

-Original Message-
From: Andrew Wang [mailto:andrew.w...@cloudera.com] 
Sent: Monday, May 16, 2016 12:14 AM
To: Zheng, Kai 
Cc: common-dev@hadoop.apache.org
Subject: Re: Different JIRA permissions for HADOOP and HDFS

I just gave you committer permissions on JIRA, try now?

On Mon, May 16, 2016 at 12:03 AM, Zheng, Kai  wrote:

> I just ran into the bad situation that I committed HDFS-8449 but can't 
> resolve the issue due to lacking the required permission to me. Am not 
> sure if it's caused by my setup or environment change (temporally 
> working in a new time zone). Would anyone help resolve the issue for 
> me to avoid bad state? Thanks!
>
> -Original Message-
> From: Zheng, Kai [mailto:kai.zh...@intel.com]
> Sent: Sunday, May 15, 2016 3:20 PM
> To: Allen Wittenauer 
> Cc: common-dev@hadoop.apache.org
> Subject: RE: Different JIRA permissions for HADOOP and HDFS
>
> Thanks Allen for illustrating this in details. I understand. The left 
> question is, is it intended only JIRA owner (not sure about admin 
> users) can do the operations like updating a patch?
>
> Regards,
> Kai
>
> -Original Message-
> From: Allen Wittenauer [mailto:allenwittena...@yahoo.com.INVALID]
> Sent: Saturday, May 14, 2016 9:38 AM
> To: Zheng, Kai 
> Cc: common-dev@hadoop.apache.org
> Subject: Re: Different JIRA permissions for HADOOP and HDFS
>
>
> > On May 14, 2016, at 7:07 AM, Zheng, Kai  wrote:
> >
> > Hi,
> >
> > Noticed this difference but not sure if it’s intended. YARN is 
> > similar
> with HDFS. It’s not convenient. Any clarifying?
>
>
> Under JIRA, different projects (e.g., HADOOP, YARN, MAPREDUCE, 
> HDFS, YETUS, HBASE, ACCUMULO, etc) may have different settings.  At 
> one point in time, all of the Hadoop subprojects were under one JIRA 
> project (HADOOP). But then a bunch of folks decided they didn’t want 
> to see the other sub projects issues so they split them up…. and thus 
> setting the stage for duplicate code and operational divergence in the source.
>
> Since people don’t realize or care that they are separate, 
> people will file INFRA tickets or whatever to change “their project” 
> and not the rest. This leads to the JIRA projects also diverging… 
> which ultimately drives those of us who actually look at the project as a 
> whole bonkers.
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org


Re: Different JIRA permissions for HADOOP and HDFS

2016-05-16 Thread Andrew Wang
I just gave you committer permissions on JIRA, try now?

On Mon, May 16, 2016 at 12:03 AM, Zheng, Kai  wrote:

> I just ran into the bad situation that I committed HDFS-8449 but can't
> resolve the issue due to lacking the required permission to me. Am not sure
> if it's caused by my setup or environment change (temporally working in a
> new time zone). Would anyone help resolve the issue for me to avoid bad
> state? Thanks!
>
> -Original Message-
> From: Zheng, Kai [mailto:kai.zh...@intel.com]
> Sent: Sunday, May 15, 2016 3:20 PM
> To: Allen Wittenauer 
> Cc: common-dev@hadoop.apache.org
> Subject: RE: Different JIRA permissions for HADOOP and HDFS
>
> Thanks Allen for illustrating this in details. I understand. The left
> question is, is it intended only JIRA owner (not sure about admin users)
> can do the operations like updating a patch?
>
> Regards,
> Kai
>
> -Original Message-
> From: Allen Wittenauer [mailto:allenwittena...@yahoo.com.INVALID]
> Sent: Saturday, May 14, 2016 9:38 AM
> To: Zheng, Kai 
> Cc: common-dev@hadoop.apache.org
> Subject: Re: Different JIRA permissions for HADOOP and HDFS
>
>
> > On May 14, 2016, at 7:07 AM, Zheng, Kai  wrote:
> >
> > Hi,
> >
> > Noticed this difference but not sure if it’s intended. YARN is similar
> with HDFS. It’s not convenient. Any clarifying?
>
>
> Under JIRA, different projects (e.g., HADOOP, YARN, MAPREDUCE,
> HDFS, YETUS, HBASE, ACCUMULO, etc) may have different settings.  At one
> point in time, all of the Hadoop subprojects were under one JIRA project
> (HADOOP). But then a bunch of folks decided they didn’t want to see the
> other sub projects issues so they split them up…. and thus setting the
> stage for duplicate code and operational divergence in the source.
>
> Since people don’t realize or care that they are separate, people
> will file INFRA tickets or whatever to change “their project” and not the
> rest. This leads to the JIRA projects also diverging… which ultimately
> drives those of us who actually look at the project as a whole bonkers.
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>


RE: Different JIRA permissions for HADOOP and HDFS

2016-05-16 Thread Zheng, Kai
I just ran into a bad situation: I committed HDFS-8449 but can't resolve the issue 
because I lack the required permission. I'm not sure whether this is caused by my 
setup or by an environment change (I'm temporarily working in a new time zone). 
Would anyone help resolve the issue for me so it doesn't stay in a bad state? Thanks!

-Original Message-
From: Zheng, Kai [mailto:kai.zh...@intel.com] 
Sent: Sunday, May 15, 2016 3:20 PM
To: Allen Wittenauer 
Cc: common-dev@hadoop.apache.org
Subject: RE: Different JIRA permissions for HADOOP and HDFS

Thanks Allen for illustrating this in details. I understand. The left question 
is, is it intended only JIRA owner (not sure about admin users) can do the 
operations like updating a patch?

Regards,
Kai

-Original Message-
From: Allen Wittenauer [mailto:allenwittena...@yahoo.com.INVALID] 
Sent: Saturday, May 14, 2016 9:38 AM
To: Zheng, Kai 
Cc: common-dev@hadoop.apache.org
Subject: Re: Different JIRA permissions for HADOOP and HDFS


> On May 14, 2016, at 7:07 AM, Zheng, Kai  wrote:
> 
> Hi,
>  
> Noticed this difference but not sure if it’s intended. YARN is similar with 
> HDFS. It’s not convenient. Any clarifying?


Under JIRA, different projects (e.g., HADOOP, YARN, MAPREDUCE, HDFS, 
YETUS, HBASE, ACCUMULO, etc) may have different settings.  At one point in 
time, all of the Hadoop subprojects were under one JIRA project (HADOOP). But 
then a bunch of folks decided they didn’t want to see the other sub projects 
issues so they split them up…. and thus setting the stage for duplicate code 
and operational divergence in the source. 

Since people don’t realize or care that they are separate, people will 
file INFRA tickets or whatever to change “their project” and not the rest. This 
leads to the JIRA projects also diverging… which ultimately drives those of us 
who actually look at the project as a whole bonkers.
-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org


Re: [VOTE] Merge feature branch HADOOP-12930

2016-05-16 Thread Masatake Iwasaki

> Fix (with unit tests!) committed.

Thanks, Allen.

+1.

- checked out HADOOP-12930 and ran shelltest locally
- ran 'mvn package -Pdist -Pnative -DskipTests'
- started a pseudo-distributed cluster with start-dfs.sh and start-yarn.sh
- ran 'hadoop distcp' and 'mapred streaming' successfully
- created etc/hadoop/shellprofile.d/test.sh and added subcommands as described 
  in UnixShellGuide.md (see the sketch after this list)
- tested overriding built-in subcommands
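
A minimal sketch of such a profile, relying only on the hadoop_subcommand_<name> 
naming convention visible in the debug line quoted below; the subcommand name and 
its body are invented, and UnixShellGuide.md on the branch is the authoritative 
reference:

# etc/hadoop/shellprofile.d/test.sh (hypothetical example)
# The hadoop wrapper script is expected to dispatch "hadoop hello ..." to this
# function dynamically, passing the remaining arguments through.
function hadoop_subcommand_hello
{
  echo "hello from a user-defined subcommand: $*"
}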

Masatake Iwasaki

On 5/16/16 00:30, Allen Wittenauer wrote:

On May 15, 2016, at 7:27 AM, Allen Wittenauer  wrote:



On May 14, 2016, at 9:04 PM, Masatake Iwasaki  wrote:

+  hadoop_debug "Calling dynamically: hadoop_subcommand_${HADOOP_SUBCMD} ${HADOOP_SUBCMD_ARGS[$*]}"

Easy fix.  The $* should just be *.  I’ll open an issue and fix it here in a sec.


Fix (with unit tests!) committed.

Thanks
-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
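
A minimal bash sketch of the quoting issue fixed above (the array contents are 
invented; HADOOP_SUBCMD_ARGS is the array named in the quoted debug line):

HADOOP_SUBCMD_ARGS=("-archives" "foo.tar" "input" "output")

# Buggy form: the "$*" subscript is evaluated as an array index, so at best a
# single element is printed and at worst the expansion fails outright.
echo "buggy: ${HADOOP_SUBCMD_ARGS[$*]}"

# The committed fix ("the $* should just be *") expands the whole argument list:
echo "fixed: ${HADOOP_SUBCMD_ARGS[*]}"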




-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org