[jira] [Created] (HADOOP-17569) Building native code fails on Fedora 33

2021-03-07 Thread Kengo Seki (Jira)
Kengo Seki created HADOOP-17569:
---

 Summary: Building native code fails on Fedora 33
 Key: HADOOP-17569
 URL: https://issues.apache.org/jira/browse/HADOOP-17569
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, common
Reporter: Kengo Seki


I tried to build the native code on Fedora 33, which ships glibc 2.32 by 
default, but the build failed with the following error.
{code:java}
$ cat /etc/redhat-release 
Fedora release 33 (Thirty Three)
$ sudo dnf info --installed glibc
Installed Packages
Name : glibc
Version  : 2.32
Release  : 1.fc33
Architecture : x86_64
Size : 17 M
Source   : glibc-2.32-1.fc33.src.rpm
Repository   : @System
From repo: anaconda
Summary  : The GNU libc libraries
URL  : http://www.gnu.org/software/glibc/
License  : LGPLv2+ and LGPLv2+ with exceptions and GPLv2+ and GPLv2+ with exceptions and BSD and Inner-Net and ISC and Public Domain and GFDL
Description  : The glibc package contains standard libraries which are used by
 : multiple programs on the system. In order to save disk space and
 : memory, as well as to make upgrading easier, common system code is
 : kept in one place and shared between programs. This particular package
 : contains the most important sets of shared libraries: the standard C
 : library and the standard math library. Without these two libraries, a
 : Linux system will not function.

$ mvn clean compile -Pnative

...

[INFO] Running make -j 1 VERBOSE=1
[WARNING] /usr/bin/cmake 
-S/home/vagrant/hadoop/hadoop-common-project/hadoop-common/src 
-B/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native 
--check-build-system CMakeFiles/Makefile.cmake 0
[WARNING] /usr/bin/cmake -E cmake_progress_start 
/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native/CMakeFiles
 
/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native//CMakeFiles/progress.marks
[WARNING] make  -f CMakeFiles/Makefile2 all
[WARNING] make[1]: Entering directory 
'/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native'
[WARNING] make  -f CMakeFiles/hadoop_static.dir/build.make 
CMakeFiles/hadoop_static.dir/depend
[WARNING] make[2]: Entering directory 
'/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native'
[WARNING] cd 
/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native && 
/usr/bin/cmake -E cmake_depends "Unix Makefiles" 
/home/vagrant/hadoop/hadoop-common-project/hadoop-common/src 
/home/vagrant/hadoop/hadoop-common-project/hadoop-common/src 
/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native 
/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native 
/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native/CMakeFiles/hadoop_static.dir/DependInfo.cmake
 --color=
[WARNING] Dependee 
"/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native/CMakeFiles/hadoop_static.dir/DependInfo.cmake"
 is newer than depender 
"/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native/CMakeFiles/hadoop_static.dir/depend.internal".
[WARNING] Dependee 
"/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native/CMakeFiles/CMakeDirectoryInformation.cmake"
 is newer than depender 
"/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native/CMakeFiles/hadoop_static.dir/depend.internal".
[WARNING] Scanning dependencies of target hadoop_static
[WARNING] make[2]: Leaving directory 
'/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native'
[WARNING] make  -f CMakeFiles/hadoop_static.dir/build.make 
CMakeFiles/hadoop_static.dir/build
[WARNING] make[2]: Entering directory 
'/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native'
[WARNING] [  2%] Building C object 
CMakeFiles/hadoop_static.dir/main/native/src/exception.c.o
[WARNING] /usr/bin/cc  
-I/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native/javah 
-I/home/vagrant/hadoop/hadoop-common-project/hadoop-common/src/main/native/src 
-I/home/vagrant/hadoop/hadoop-common-project/hadoop-common/src 
-I/home/vagrant/hadoop/hadoop-common-project/hadoop-common/src/src 
-I/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native 
-I/usr/lib/jvm/java-1.8.0/include -I/usr/lib/jvm/java-1.8.0/include/linux 
-I/home/vagrant/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util
 -g -O2 -Wall -pthread -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE -std=gnu99 -o 
CMakeFiles/hadoop_static.dir/main/native/src/exception.c.o -c 
/home/vagrant/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/exception.c
[WARNING] make[2]: Leaving directory 
'/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native'
[WARNING] make[1]: Leaving directory 

[jira] [Updated] (HADOOP-16764) Rewrite Python example codes using Python3

2019-12-14 Thread Kengo Seki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-16764:

Status: Patch Available  (was: Open)

> Rewrite Python example codes using Python3
> --
>
> Key: HADOOP-16764
> URL: https://issues.apache.org/jira/browse/HADOOP-16764
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
>







[jira] [Created] (HADOOP-16764) Rewrite Python example codes using Python3

2019-12-14 Thread Kengo Seki (Jira)
Kengo Seki created HADOOP-16764:
---

 Summary: Rewrite Python example codes using Python3
 Key: HADOOP-16764
 URL: https://issues.apache.org/jira/browse/HADOOP-16764
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation
Reporter: Kengo Seki
Assignee: Kengo Seki









[jira] [Commented] (HADOOP-16747) Support Python 3

2019-12-04 Thread Kengo Seki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16988548#comment-16988548
 ] 

Kengo Seki commented on HADOOP-16747:
-

Example scripts in the documentation, such as 
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/RackAwareness.html
 and 
https://hadoop.apache.org/docs/current/hadoop-streaming/HadoopStreaming.html#Hadoop_Aggregate_Package
 should also be updated ;)

> Support Python 3
> 
>
> Key: HADOOP-16747
> URL: https://issues.apache.org/jira/browse/HADOOP-16747
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
>
> Python 2.x will reach EoL at the end of 2019.
>  This is an umbrella JIRA to support Python 3 and drop Python 2.x.
> Here are the Python scripts that Hadoop has:
> {noformat}
> $ find . -name "*.py" | grep -v "hadoop-submarine"
> ./hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/job_history_summary.py
> ./dev-support/determine-flaky-tests-hadoop.py
> ./dev-support/bin/checkcompatibility.py
> {noformat}
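
For reference, a minimal, hypothetical sketch of the kind of change such a port 
usually involves (generic Python 2 idioms; not taken from the scripts listed 
above):

{code}
#!/usr/bin/env python3
# Typical Python 2 -> 3 changes: print becomes a function, true division must
# be accounted for, and dict.iteritems() becomes dict.items().
counts = {"MAP": 4, "REDUCE": 2}

total = sum(counts.values())
for phase, n in counts.items():    # Python 2 code often used iteritems()
    share = n / total              # true division in Python 3; use // for integer division
    print("{0}: {1:.0%}".format(phase, share))  # print() is a function in Python 3
{code}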






[jira] [Created] (HADOOP-14315) Python example in the rack awareness document doesn't work due to bad indentation

2017-04-17 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-14315:
---

 Summary: Python example in the rack awareness document doesn't 
work due to bad indentation
 Key: HADOOP-14315
 URL: https://issues.apache.org/jira/browse/HADOOP-14315
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Kengo Seki
Assignee: Kengo Seki
Priority: Minor


Running that example fails with:

{code}
  File "example.py", line 28
address = '{0}/{1}'.format(ip, netmask)  # format 
address string so it looks like 'ip/netmask' to make netaddr work
  ^
IndentationError: expected an indented block
{code}
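
For reference, a minimal, hypothetical sketch of how that block needs to be 
indented (the names {{ip}} and {{netmask}} and the netaddr usage are taken from 
the error above; the mapping and function are illustrative, not the documented 
script):

{code}
#!/usr/bin/env python3
# Hypothetical sketch: the 'address = ...' line from the error above has to be
# indented so that it belongs to the enclosing block (here, a function body).
import netaddr  # third-party package used by the documented example

NETWORK_MAPPING = {'10.1.0.0/16': '/rack01', '10.2.0.0/16': '/rack02'}

def resolve(ip, netmask='255.255.255.0'):
    address = '{0}/{1}'.format(ip, netmask)  # format address string so it looks like 'ip/netmask' to make netaddr work
    network = netaddr.IPNetwork(address)
    for candidate, rack in NETWORK_MAPPING.items():
        if network.ip in netaddr.IPNetwork(candidate):
            return rack
    return '/default-rack'
{code}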






[jira] [Commented] (HADOOP-13193) Upgrade to Apache Yetus 0.3.0

2016-05-27 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15305019#comment-15305019
 ] 

Kengo Seki commented on HADOOP-13193:
-

Sorry for not explaining qbt well enough, and thanks for the additional 
explanation and for committing, Chris!

> Upgrade to Apache Yetus 0.3.0
> -
>
> Key: HADOOP-13193
> URL: https://issues.apache.org/jira/browse/HADOOP-13193
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, test
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
> Fix For: 2.8.0
>
> Attachments: HADOOP-13193.1.patch
>
>
> Upgrade yetus-wrapper to 0.3.0 now that it has passed the vote.






[jira] [Updated] (HADOOP-13193) Upgrade to Apache Yetus 0.3.0

2016-05-27 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-13193:

Attachment: HADOOP-13193.1.patch

-01

* bump the version of Yetus to 0.3.0
* add qbt (quality build tool) command


> Upgrade to Apache Yetus 0.3.0
> -
>
> Key: HADOOP-13193
> URL: https://issues.apache.org/jira/browse/HADOOP-13193
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, test
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
> Attachments: HADOOP-13193.1.patch
>
>
> Upgrade yetus-wrapper to 0.3.0 now that it has passed the vote.






[jira] [Updated] (HADOOP-13193) Upgrade to Apache Yetus 0.3.0

2016-05-27 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-13193:

Status: Patch Available  (was: Open)

> Upgrade to Apache Yetus 0.3.0
> -
>
> Key: HADOOP-13193
> URL: https://issues.apache.org/jira/browse/HADOOP-13193
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, test
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
> Attachments: HADOOP-13193.1.patch
>
>
> Upgrade yetus-wrapper to 0.3.0 now that it has passed the vote.






[jira] [Assigned] (HADOOP-13193) Upgrade to Apache Yetus 0.3.0

2016-05-27 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki reassigned HADOOP-13193:
---

Assignee: Kengo Seki  (was: Allen Wittenauer)

> Upgrade to Apache Yetus 0.3.0
> -
>
> Key: HADOOP-13193
> URL: https://issues.apache.org/jira/browse/HADOOP-13193
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, test
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
>
> Upgrade yetus-wrapper to 0.3.0 now that it has passed the vote.






[jira] [Commented] (HADOOP-12691) Add CSRF Filter for REST APIs to Hadoop Common

2016-01-16 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15103222#comment-15103222
 ] 

Kengo Seki commented on HADOOP-12691:
-

No, your patch seems correct. I suspect it was applied with a wrong argument 
for the "-p" option.

> Add CSRF Filter for REST APIs to Hadoop Common
> --
>
> Key: HADOOP-12691
> URL: https://issues.apache.org/jira/browse/HADOOP-12691
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.9.0
>
> Attachments: CSRFProtectionforRESTAPIs.pdf, HADOOP-12691-001.patch, 
> HADOOP-12691-002.patch, HADOOP-12691-003.patch
>
>
> CSRF prevention for REST APIs can be provided through a common servlet 
> filter. This filter would check for the existence of an expected 
> (configurable) HTTP header - such as X-XSRF-Header.
> Because CSRF attacks are entirely browser based, the above approach ensures 
> that requests come either from applications served by the same origin as the 
> REST API, or from another origin whose explicit policy configuration allows 
> setting such a header on XmlHttpRequest.
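
For illustration only (the actual feature is a Java servlet filter in Hadoop 
Common, and the names below are hypothetical), the check described above boils 
down to rejecting state-changing requests that lack the expected header:

{code}
# Hypothetical WSGI-style sketch of the header check described above;
# this is not the Hadoop implementation.
CUSTOM_HEADER = 'X-XSRF-HEADER'               # configurable header name
IGNORED_METHODS = {'GET', 'HEAD', 'OPTIONS'}  # methods typically exempted

def csrf_filter(app):
    def wrapped(environ, start_response):
        method = environ.get('REQUEST_METHOD', 'GET')
        header_key = 'HTTP_' + CUSTOM_HEADER.replace('-', '_')
        if method not in IGNORED_METHODS and header_key not in environ:
            # Browsers cannot attach custom headers to cross-site form or
            # image requests, so requiring one blocks CSRF.
            start_response('400 Bad Request', [('Content-Type', 'text/plain')])
            return [b'Missing header required for CSRF protection\n']
        return app(environ, start_response)
    return wrapped
{code}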





[jira] [Commented] (HADOOP-12691) Add CSRF Filter for REST APIs to Hadoop Common

2016-01-16 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15103219#comment-15103219
 ] 

Kengo Seki commented on HADOOP-12691:
-

The hadoop-common directory at the top level seems to have been created by mistake:
https://github.com/apache/hadoop/tree/trunk/hadoop-common

> Add CSRF Filter for REST APIs to Hadoop Common
> --
>
> Key: HADOOP-12691
> URL: https://issues.apache.org/jira/browse/HADOOP-12691
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.9.0
>
> Attachments: CSRFProtectionforRESTAPIs.pdf, HADOOP-12691-001.patch, 
> HADOOP-12691-002.patch, HADOOP-12691-003.patch
>
>
> CSRF prevention for REST APIs can be provided through a common servlet 
> filter. This filter would check for the existence of an expected 
> (configurable) HTTP header - such as X-XSRF-Header.
> Because CSRF attacks are entirely browser based, the above approach ensures 
> that requests come either from applications served by the same origin as the 
> REST API, or from another origin whose explicit policy configuration allows 
> setting such a header on XmlHttpRequest.





[jira] [Commented] (HADOOP-12691) Add CSRF Filter for REST APIs to Hadoop Common

2016-01-16 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15103210#comment-15103210
 ] 

Kengo Seki commented on HADOOP-12691:
-

[~cnauroth], I'm afraid the new classes seem to have been put in the wrong location.

> Add CSRF Filter for REST APIs to Hadoop Common
> --
>
> Key: HADOOP-12691
> URL: https://issues.apache.org/jira/browse/HADOOP-12691
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.9.0
>
> Attachments: CSRFProtectionforRESTAPIs.pdf, HADOOP-12691-001.patch, 
> HADOOP-12691-002.patch, HADOOP-12691-003.patch
>
>
> CSRF prevention for REST APIs can be provided through a common servlet 
> filter. This filter would check for the existence of an expected 
> (configurable) HTTP header - such as X-XSRF-Header.
> Because CSRF attacks are entirely browser based, the above approach ensures 
> that requests come either from applications served by the same origin as the 
> REST API, or from another origin whose explicit policy configuration allows 
> setting such a header on XmlHttpRequest.





[jira] [Updated] (HADOOP-12681) start-build-env.sh fails in branch-2

2015-12-27 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12681:

Attachment: HADOOP-12681.branch-2.00.patch

Attaching a patch. After applying this, test-patch works in docker mode on 
branch-2, as follows:

{code}
[sekikn@localhost hadoop]$ /home/sekikn/yetus/precommit/smart-apply-patch.sh 
--plugins=all ~/HADOOP-12681.branch-2.00.patch 
Processing: /home/sekikn/HADOOP-12681.branch-2.00.patch
Patch file /home/sekikn/HADOOP-12681.branch-2.00.patch copied to 
/tmp/yetus-5197.32233
Applying the patch:
Sun Dec 27 23:41:33 JST 2015
cd /home/sekikn/dev/hadoop
git apply --binary -v --stat --apply -p0 /tmp/yetus-5197.32233/patch
Applied patch dev-support/docker/Dockerfile cleanly.
 dev-support/docker/Dockerfile |4 
 1 file changed, 4 deletions(-)
[sekikn@localhost hadoop]$ /home/sekikn/yetus/precommit/test-patch.sh 
--dirty-workspace --docker --project=hadoop MAPREDUCE-6584
/tmp/yetus-4717.16695 has been created
Running in developer mode
Processing: MAPREDUCE-6584
MAPREDUCE-6584 patch is being downloaded at Sun Dec 27 23:46:06 JST 2015 from
https://issues.apache.org/jira/secure/attachment/12779516/MAPREDUCE-6584-branch-2.01.patch

(snip)

-1 overall

 _ _ __ 
|  ___|_ _(_) |_   _ _ __ ___| |
| |_ / _` | | | | | | '__/ _ \ |
|  _| (_| | | | |_| | | |  __/_|
|_|  \__,_|_|_|\__,_|_|  \___(_)



| Vote |  Subsystem |  Runtime   | Comment

|   0  |reexec  |  0m 0s | Docker mode activated. 
|  +1  |   @author  |  0m 0s | The patch does not contain any @author 
|  ||| tags.
|  -1  |test4tests  |  0m 0s | The patch doesn't appear to include any 
|  ||| new or modified tests. Please justify why
|  ||| no new tests are needed for this patch.
|  ||| Also please list what manual steps were
|  ||| performed to verify this patch.
|  +1  |mvninstall  |  6m 10s| branch-2 passed 
|  +1  |   compile  |  0m 21s| branch-2 passed 
|  +1  |   mvnsite  |  0m 27s| branch-2 passed 
|  +1  |mvneclipse  |  0m 12s| branch-2 passed 
|  +1  |   javadoc  |  0m 21s| branch-2 passed 
|  +1  |mvninstall  |  0m 24s| the patch passed 
|  +1  |   compile  |  0m 21s| the patch passed 
|  +1  | javac  |  0m 21s| the patch passed 
|  +1  |   mvnsite  |  0m 28s| the patch passed 
|  +1  |mvneclipse  |  0m 11s| the patch passed 
|  +1  |whitespace  |  0m 0s | Patch has no whitespace issues. 
|  +1  |   xml  |  0m 0s | The patch has no ill-formed XML file. 
|  +1  |   javadoc  |  0m 21s| the patch passed 
|  +1  |asflicense  |  0m 18s| Patch does not generate ASF License 
|  ||| warnings.
|  ||  10m 2s| 


|| Subsystem || Report/Notes ||

| Docker | Client=1.7.1 Server=1.7.1 Image:yetus/hadoop:577e74f |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12779516/MAPREDUCE-6584-branch-2.01.patch
 |
| JIRA Issue | MAPREDUCE-6584 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 7ba42874cad1 2.6.32-573.8.1.el6.x86_64 #1 SMP Tue Nov 10 
18:01:38 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 42160d3 |
| Default Java | 1.7.0_80 |
| modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core |
| Max memory used | 75MB |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |
{code}

> start-build-env.sh fails in branch-2
> 
>
> Key: HADOOP-12681
> URL: https://issues.apache.org/jira/browse/HADOOP-12681
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira AJISAKA
>Priority: Blocker
>  Labels: newbie
> Attachments: HADOOP-12681.branch-2.00.patch
>
>
> start-build-env.sh fails in branch-2. Found in MAPREDUCE-6584. 
> https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6236/console





[jira] [Updated] (HADOOP-12681) start-build-env.sh fails in branch-2

2015-12-27 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12681:

Assignee: Kengo Seki
  Status: Patch Available  (was: Open)

> start-build-env.sh fails in branch-2
> 
>
> Key: HADOOP-12681
> URL: https://issues.apache.org/jira/browse/HADOOP-12681
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira AJISAKA
>Assignee: Kengo Seki
>Priority: Blocker
>  Labels: newbie
> Attachments: HADOOP-12681.branch-2.00.patch
>
>
> start-build-env.sh fails in branch-2. Found in MAPREDUCE-6584. 
> https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6236/console





[jira] [Commented] (HADOOP-12273) releasedocmaker.py fails with stacktrace if --project option is not specified

2015-12-17 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15063263#comment-15063263
 ] 

Kengo Seki commented on HADOOP-12273:
-

Hi [~lewuathe], development of releasedocmaker has stopped in the Hadoop 
project and moved to Yetus, and the HADOOP-12111 branch has already been 
removed.
You could backport this patch to trunk or branch-2, but I strongly recommend 
using the one included in [the latest Yetus 
release|http://yetus.apache.org/downloads/], since many improvements and bug 
fixes have been applied there.

> releasedocmaker.py fails with stacktrace if --project option is not specified
> -
>
> Key: HADOOP-12273
> URL: https://issues.apache.org/jira/browse/HADOOP-12273
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Trivial
> Fix For: HADOOP-12111
>
> Attachments: HADOOP-12273.HADOOP-12111.00.patch, 
> HADOOP-12273.HADOOP-12111.01.patch
>
>
> It should show its usage instead. 
> {code}
> [sekikn@mobile hadoop]$ dev-support/releasedocmaker.py --version 3.0.0
> Traceback (most recent call last):
>   File "dev-support/releasedocmaker.py", line 580, in <module>
> main()
>   File "dev-support/releasedocmaker.py", line 424, in main
> title=projects[0]
> TypeError: 'NoneType' object has no attribute '__getitem__'
> {code}
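
The fix the summary asks for amounts to validating the options before using 
them. A hypothetical sketch (option names are illustrative, and this is not the 
actual releasedocmaker.py code):

{code}
# Hypothetical sketch: print the usage instead of crashing later with a
# TypeError on NoneType when --project is not specified.
from optparse import OptionParser
import sys

def parse_args():
    parser = OptionParser(usage="usage: %prog --project PROJECT --version VERSION")
    parser.add_option("--project", dest="projects", action="append",
                      help="project key, e.g. HADOOP (may be repeated)")
    parser.add_option("--version", dest="versions", action="append",
                      help="version to generate release notes for")
    (options, _args) = parser.parse_args()
    if not options.projects or not options.versions:
        parser.print_help()
        sys.exit(1)
    return options
{code}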





[jira] [Updated] (HADOOP-9637) Adding Native Fstat for Windows as needed by YARN

2015-10-25 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-9637:
---
Assignee: Chuan Liu  (was: Kengo Seki)

> Adding Native Fstat for Windows as needed by YARN
> -
>
> Key: HADOOP-9637
> URL: https://issues.apache.org/jira/browse/HADOOP-9637
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta, 3.0.0
>Reporter: Chuan Liu
>Assignee: Chuan Liu
> Fix For: 2.1.0-beta
>
> Attachments: HADOOP-9637-trunk.2.patch, HADOOP-9637-trunk.3.patch, 
> HADOOP-9637-trunk.patch
>
>
> In YARN, the NodeManager needs to enforce that a log file can only be accessed 
> by its owner. At various places, {{SecureIOUtils.openForRead()}} is called to 
> enforce this check. We don't have the {{NativeIO.Posix.getFstat()}} used by 
> {{SecureIOUtils.openForRead()}} on Windows, and this makes the check fail on 
> Windows. The YARN unit tests 
> TestAggregatedLogFormat.testContainerLogsFileAccess and 
> TestContainerLogsPage.testContainerLogPageAccess fail on Windows because of 
> this.
> This JIRA tries to provide a Windows implementation of 
> {{NativeIO.Posix.getFstat()}}.
> The TestAggregatedLogFormat.testContainerLogsFileAccess test case fails on 
> Windows. The test case tries to simulate a situation where the first log file 
> is owned by a different user (probably a symlink) and the second one by the 
> user itself. In this situation, the attempt to aggregate the logs should fail 
> with the error message "Owner ... for path ... did not match expected owner 
> ...".
> The check on the file owner happens in the {{AggregatedLogFormat.write()}} 
> method. The method calls {{SecureIOUtils.openForRead()}} to read the log files 
> before writing out to the OutputStream.
> {{SecureIOUtils.openForRead()}} uses {{NativeIO.Posix.getFstat()}} to get the 
> file owner and group. We don't have a {{NativeIO.Posix.getFstat()}} 
> implementation on Windows; thus, the failure.





[jira] [Assigned] (HADOOP-9637) Adding Native Fstat for Windows as needed by YARN

2015-10-25 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki reassigned HADOOP-9637:
--

Assignee: Kengo Seki  (was: Chuan Liu)

> Adding Native Fstat for Windows as needed by YARN
> -
>
> Key: HADOOP-9637
> URL: https://issues.apache.org/jira/browse/HADOOP-9637
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta, 3.0.0
>Reporter: Chuan Liu
>Assignee: Kengo Seki
> Fix For: 2.1.0-beta
>
> Attachments: HADOOP-9637-trunk.2.patch, HADOOP-9637-trunk.3.patch, 
> HADOOP-9637-trunk.patch
>
>
> In YARN, the NodeManager needs to enforce that a log file can only be accessed 
> by its owner. At various places, {{SecureIOUtils.openForRead()}} is called to 
> enforce this check. We don't have the {{NativeIO.Posix.getFstat()}} used by 
> {{SecureIOUtils.openForRead()}} on Windows, and this makes the check fail on 
> Windows. The YARN unit tests 
> TestAggregatedLogFormat.testContainerLogsFileAccess and 
> TestContainerLogsPage.testContainerLogPageAccess fail on Windows because of 
> this.
> This JIRA tries to provide a Windows implementation of 
> {{NativeIO.Posix.getFstat()}}.
> The TestAggregatedLogFormat.testContainerLogsFileAccess test case fails on 
> Windows. The test case tries to simulate a situation where the first log file 
> is owned by a different user (probably a symlink) and the second one by the 
> user itself. In this situation, the attempt to aggregate the logs should fail 
> with the error message "Owner ... for path ... did not match expected owner 
> ...".
> The check on the file owner happens in the {{AggregatedLogFormat.write()}} 
> method. The method calls {{SecureIOUtils.openForRead()}} to read the log files 
> before writing out to the OutputStream.
> {{SecureIOUtils.openForRead()}} uses {{NativeIO.Posix.getFstat()}} to get the 
> file owner and group. We don't have a {{NativeIO.Posix.getFstat()}} 
> implementation on Windows; thus, the failure.





[jira] [Commented] (HADOOP-4258) the test patch script should check for filenames that differ only in case

2015-09-22 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14902886#comment-14902886
 ] 

Kengo Seki commented on HADOOP-4258:


Sorry [~jagadesh.kiran], I'm on vacation until Sep. 27th. I'll review your 
patch next Monday.

> the test patch script should check for filenames that differ only in case
> -
>
> Key: HADOOP-4258
> URL: https://issues.apache.org/jira/browse/HADOOP-4258
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, test
>Reporter: Owen O'Malley
>Assignee: Jagadesh Kiran N
>  Labels: test-patch
> Attachments: HADOOP-4258.001.patch, 
> HADOOP-4258.HADOOP-12111.00.patch, HADOOP-4258.HADOOP-12111.01.patch, 
> HADOOP-4258.HADOOP-12111.02.patch
>
>
> It would be nice if the test patch script warned about filenames that differ 
> only in case. We recently had a patch committed that had a pair of colliding 
> filenames and subversion broke badly on my Mac.





[jira] [Commented] (HADOOP-4258) the test patch script should check for filenames that differ only in case

2015-09-15 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746778#comment-14746778
 ] 

Kengo Seki commented on HADOOP-4258:


Thanks [~jagadesh.kiran], some comments:

* Shortening only the filename is not sufficient. We must also shorten 
"duplicatednames" in the file, because the argument of add_test is used as the 
plugin name. See the [Yetus precommit document about plugin 
names|https://github.com/apache/hadoop/blob/HADOOP-12111/dev-support/docs/precommit-advanced.md#common-plug-in-functions].

* The following is probably different from what Allen originally meant. It 
just sorts changed_files, whereas {{uniq -id}} outputs only the duplicated 
lines, ignoring case (see the sketch after this list).

{code}
IFS=$'\n' changed_files=($(sort --ignore-case <<<"${CHANGED_FILES[*]}"))
{code}

* The following line will fail because the closing brace is missing.

{code}
for ((j=i+1; j < ${#changed_files[@]; j++)) {
{code}

* Address shellcheck and whitespace warnings.
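
For illustration only (the actual plugin is a bash test-patch hook; this just 
shows the intended behavior), detecting filenames that differ only in case 
amounts to grouping the changed files case-insensitively and reporting any 
group with more than one entry:

{code}
# Illustrative Python sketch of the intended check, not the bash implementation.
from collections import defaultdict

def find_case_collisions(changed_files):
    groups = defaultdict(list)
    for path in changed_files:
        groups[path.lower()].append(path)
    # keep only the groups where two or more paths differ only in case
    return [paths for paths in groups.values() if len(paths) > 1]

print(find_case_collisions(["src/Foo.java", "src/foo.java", "src/Bar.java"]))
# -> [['src/Foo.java', 'src/foo.java']]
{code}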

To be clear, the review comments that are not yet addressed in this patch, 
besides those mentioned above, are:

bq. the problem that Owen O'Malley is actually talking about is a problem where 
a new file was added because it conflicted with a pre-existing file in the 
source tree. This patch would miss that situation.
bq. $a and $b should be changed, like "$first_lower", "$second_lower".
bq. if HADOOP-12257 gets committed, this should use _precheck instead of 
_postcheckout.


> the test patch script should check for filenames that differ only in case
> -
>
> Key: HADOOP-4258
> URL: https://issues.apache.org/jira/browse/HADOOP-4258
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, test
>Reporter: Owen O'Malley
>Assignee: Jagadesh Kiran N
>  Labels: test-patch
> Attachments: HADOOP-4258.001.patch, 
> HADOOP-4258.HADOOP-12111.00.patch, HADOOP-4258.HADOOP-12111.01.patch
>
>
> It would be nice if the test patch script warned about filenames that differ 
> only in case. We recently had a patch committed that had a pair of colliding 
> filenames and subversion broke badly on my Mac.





[jira] [Commented] (HADOOP-12399) Wrong help messages in some test-patch plugins

2015-09-15 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746465#comment-14746465
 ] 

Kengo Seki commented on HADOOP-12399:
-

Thanks [~jagadesh.kiran], +1 non-binding.

> Wrong help messages in some test-patch plugins
> --
>
> Key: HADOOP-12399
> URL: https://issues.apache.org/jira/browse/HADOOP-12399
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-12399.HADOOP-12111.00.patch, 
> HADOOP-12399.HADOOP-12111.01.patch, HADOOP-12399.HADOOP-12111.02.patch
>
>
> dev-support/personality/bigtop.sh:
> {code}
>  32 function bigtop_usage
>  33 {
>  34   echo "Bigtop specific:"
>  35   echo "--bigtop-puppetsetup=[false|true]   execute the bigtop dev setup 
> (needs sudo to root)"
>  36 }
> {code}
> s/bigtop-puppetsetup/bigtop-puppet/.
> dev-support/test-patch.d/gradle.sh:
> {code}
>  21 function gradle_usage
>  22 {
>  23   echo "gradle specific:"
>  24   echo "--gradle-cmd=The 'gradle' command to use (default 
> 'gradle')"
>  25   echo "--gradlew-cmd=The 'gradle' command to use (default 
> 'basedir/gradlew')"
>  26 }
> {code}
> s/'gradle' command/'gradlew' command/ for the latter.





[jira] [Commented] (HADOOP-12393) asflicense is easily tricked

2015-09-14 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744485#comment-14744485
 ] 

Kengo Seki commented on HADOOP-12393:
-

Got it. Thanks!

> asflicense is easily tricked
> 
>
> Key: HADOOP-12393
> URL: https://issues.apache.org/jira/browse/HADOOP-12393
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>
> asflicense needs to make sure that it gets at least one report file instead 
> of assuming nothing is wrong.





[jira] [Commented] (HADOOP-12393) asflicense is easily tricked

2015-09-14 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744468#comment-14744468
 ] 

Kengo Seki commented on HADOOP-12393:
-

Let me confirm; my understanding is:

* asflicense should fail for Kafka, because its {{gradle rat}} generates only 
rat-report.html and rat-report.xml for now.
* But because asflicense judges only by the ant/mvn/gradle exit status, it 
succeeds even though rat-report.txt does not exist.
* So we should fix the plugin to check that at least one of the expected 
output files exists (see the sketch below).
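
For illustration only (the actual plugin is a bash test-patch hook, and the 
file names are just the ones mentioned above), the kind of check I mean:

{code}
# Illustrative Python sketch, not the actual bash asflicense plugin: pass only
# if at least one of the expected RAT report files actually exists.
import os

def rat_report_exists(patch_dir):
    candidates = ["rat-report.txt", "rat-report.xml", "rat-report.html"]
    return any(os.path.exists(os.path.join(patch_dir, name)) for name in candidates)
{code}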

Is this what you meant?

> asflicense is easily tricked
> 
>
> Key: HADOOP-12393
> URL: https://issues.apache.org/jira/browse/HADOOP-12393
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>
> asflicense needs to make sure that it gets at least one report file instead 
> of assuming nothing is wrong.





[jira] [Resolved] (HADOOP-12395) big top personality should skip asflicense

2015-09-13 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki resolved HADOOP-12395.
-
Resolution: Won't Fix

Closing this because BIGTOP-2020 has been merged and {{gradlew rat}} works now.

> big top personality should skip asflicense
> --
>
> Key: HADOOP-12395
> URL: https://issues.apache.org/jira/browse/HADOOP-12395
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>
> Bigtop doesn't support RAT checks as part of gradle, so it should probably 
> skip them for now.





[jira] [Commented] (HADOOP-12401) Wrong function names in test-patch bigtop personality

2015-09-13 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14742592#comment-14742592
 ] 

Kengo Seki commented on HADOOP-12401:
-

It's enough for {{gradlew toolchain}} to be called only once, and it should be 
called before the other gradle commands, so I think _precheck is suitable.

> Wrong function names in test-patch bigtop personality
> -
>
> Key: HADOOP-12401
> URL: https://issues.apache.org/jira/browse/HADOOP-12401
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>
> In dev-support/personality/bigtop.sh:
> {code}
>  51 function bigtop_precheck_postinstall
>  52 {
>  53   if [[ ${BIGTOP_PUPPETSETUP} = "true" ]]; then
>  54 pushd "${BASEDIR}" >/dev/null
>  55 echo_and_redirect "${PATCH_DIR}/bigtop-branch-toolchain.txt" 
> "${GRADLEW}" toolchain
>  56 popd >/dev/null
>  57   fi
>  58 }
>  59 
>  60 function bigtop_postapply_postinstall
>  61 {
>  62   if [[ ${BIGTOP_PUPPETSETUP} = "true" ]]; then
>  63 pushd "${BASEDIR}" >/dev/null
>  64 echo_and_redirect "${PATCH_DIR}/bigtop-patch-toolchain.txt" 
> "${GRADLEW}" toolchain
>  65 popd >/dev/null
>  66   fi
>  67 }
> {code}
> Their names do not match the test-patch plugin callback naming convention. 
> Maybe it should be something like:
> {code}
> function bigtop_precompile
> {
>   declare codebase=$1
>   if [[ ${BIGTOP_PUPPETSETUP} = "true" && ( ${codebase} = "branch" || 
> ${codebase} = "patch" ) ]]; then
> pushd "${BASEDIR}" >/dev/null
> echo_and_redirect "${PATCH_DIR}/bigtop-${codebase}-toolchain.txt" 
> "${GRADLEW}" toolchain
> popd >/dev/null
>   fi  
> }
> {code}





[jira] [Commented] (HADOOP-12399) Wrong help messages in some test-patch plugins

2015-09-13 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14742560#comment-14742560
 ] 

Kengo Seki commented on HADOOP-12399:
-

Not BIGTOP_PUPPETSETUP, but bigtop-puppet. Just removing "setup" from the 
original code is enough.

> Wrong help messages in some test-patch plugins
> --
>
> Key: HADOOP-12399
> URL: https://issues.apache.org/jira/browse/HADOOP-12399
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-12399.HADOOP-12111.00.patch, 
> HADOOP-12399.HADOOP-12111.01.patch
>
>
> dev-support/personality/bigtop.sh:
> {code}
>  32 function bigtop_usage
>  33 {
>  34   echo "Bigtop specific:"
>  35   echo "--bigtop-puppetsetup=[false|true]   execute the bigtop dev setup 
> (needs sudo to root)"
>  36 }
> {code}
> s/bigtop-puppetsetup/bigtop-puppet/.
> dev-support/test-patch.d/gradle.sh:
> {code}
>  21 function gradle_usage
>  22 {
>  23   echo "gradle specific:"
>  24   echo "--gradle-cmd=The 'gradle' command to use (default 
> 'gradle')"
>  25   echo "--gradlew-cmd=The 'gradle' command to use (default 
> 'basedir/gradlew')"
>  26 }
> {code}
> s/'gradle' command/'gradlew' command/ for the latter.





[jira] [Updated] (HADOOP-12111) [Umbrella] Split test-patch off into its own TLP

2015-09-10 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12111:

Assignee: (was: Kengo Seki)

> [Umbrella] Split test-patch off into its own TLP
> 
>
> Key: HADOOP-12111
> URL: https://issues.apache.org/jira/browse/HADOOP-12111
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>
> Given test-patch's tendency to get forked into a variety of different 
> projects, it makes a lot of sense to make an Apache TLP so that everyone can 
> benefit from a common code base.





[jira] [Assigned] (HADOOP-12111) [Umbrella] Split test-patch off into its own TLP

2015-09-10 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki reassigned HADOOP-12111:
---

Assignee: Kengo Seki

> [Umbrella] Split test-patch off into its own TLP
> 
>
> Key: HADOOP-12111
> URL: https://issues.apache.org/jira/browse/HADOOP-12111
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
>
> Given test-patch's tendency to get forked into a variety of different 
> projects, it makes a lot of sense to make an Apache TLP so that everyone can 
> benefit from a common code base.





[jira] [Commented] (HADOOP-12400) Wrong comment for scaladoc_rebuild function in test-patch scala plugin

2015-09-10 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739966#comment-14739966
 ] 

Kengo Seki commented on HADOOP-12400:
-

Thanks [~jagadesh.kiran], but I think it's enough to simply replace "JavaDoc" 
with "ScalaDoc" in the original comment. Since "_rebuild" is just a hook name, 
the new comment "Rebuild scala doc" is somewhat inaccurate: this function works 
as the original comment says, by calling generic_pre_handler and 
generic_post_handler.

> Wrong comment for scaladoc_rebuild function in test-patch scala plugin
> --
>
> Key: HADOOP-12400
> URL: https://issues.apache.org/jira/browse/HADOOP-12400
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-12400.HADOOP-12111.00.patch, 
> HADOOP-12400.HADOOP-12111.01.patch
>
>
> {code}
>  62 ## @description  Count and compare the number of JavaDoc warnings pre- 
> and post- patch
>  63 ## @audience private
>  64 ## @stabilityevolving
>  65 ## @replaceable  no
>  66 ## @return   0 on success
>  67 ## @return   1 on failure
>  68 function scaladoc_rebuild
> {code}
> s/JavaDoc/ScalaDoc/





[jira] [Commented] (HADOOP-12397) Incomplete comment for test-patch compile_cycle function

2015-09-10 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739941#comment-14739941
 ] 

Kengo Seki commented on HADOOP-12397:
-

Thanks [~jagadesh.kiran], could you change the enumeration order in the comment 
to reflect the actual invocation order?

> Incomplete comment for test-patch compile_cycle function
> 
>
> Key: HADOOP-12397
> URL: https://issues.apache.org/jira/browse/HADOOP-12397
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-12397.HADOOP-12111.00.patch
>
>
> Its comment says:
> {code}
> ## @description  This will callout to _precompile, compile, and _postcompile
> {code}
> but it calls _rebuild also.





[jira] [Commented] (HADOOP-12397) Incomplete comment for test-patch compile_cycle function

2015-09-10 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739935#comment-14739935
 ] 

Kengo Seki commented on HADOOP-12397:
-

Allen, the architecture and advanced documents already seem to mention the 
_rebuild phase. Is there anything else you're concerned about?

> Incomplete comment for test-patch compile_cycle function
> 
>
> Key: HADOOP-12397
> URL: https://issues.apache.org/jira/browse/HADOOP-12397
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-12397.HADOOP-12111.00.patch
>
>
> Its comment says:
> {code}
> ## @description  This will callout to _precompile, compile, and _postcompile
> {code}
> but it calls _rebuild also.





[jira] [Commented] (HADOOP-12398) filefilter function in test-patch flink personality is never called

2015-09-10 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739919#comment-14739919
 ] 

Kengo Seki commented on HADOOP-12398:
-

[~jagadesh.kiran] Yes, that function is intended to be called from 
test-patch.sh, but because of its name it never will be. [As the Yetus 
precommit document 
explains|https://github.com/apache/hadoop/blob/HADOOP-12111/dev-support/docs/precommit-advanced.md#test-plug-ins],
 the plugin name is passed to test-patch.sh via the add_plugin function. In 
this case, the plugin name is "flinklib":

{code}
 26 add_plugin flinklib
{code}

so the function name should be "flinklib_filefilter". Could you rename the 
function rather than remove it? Thanks.

> filefilter function in test-patch flink personality is never called
> ---
>
> Key: HADOOP-12398
> URL: https://issues.apache.org/jira/browse/HADOOP-12398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>  Labels: newbie
> Attachments: HADOOP-12398.HADOOP-12111.00.patch
>
>
> Wrong function name.
> {code}
>  28 function fliblib_filefilter
>  29 {
>  30   local filename=$1
>  31 
>  32   if [[ ${filename} =~ \.java$
>  33 || ${filename} =~ \.scala$
>  34 || ${filename} =~ pom.xml$ ]]; then
>  35 add_test flinklib
>  36   fi
>  37 }
> {code}





[jira] [Updated] (HADOOP-12398) filefilter function in test-patch flink personality is never called

2015-09-10 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12398:

Labels: newbie  (was: )

> filefilter function in test-patch flink personality is never called
> ---
>
> Key: HADOOP-12398
> URL: https://issues.apache.org/jira/browse/HADOOP-12398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>  Labels: newbie
>
> Wrong function name.
> {code}
>  28 function fliblib_filefilter
>  29 {
>  30   local filename=$1
>  31 
>  32   if [[ ${filename} =~ \.java$
>  33 || ${filename} =~ \.scala$
>  34 || ${filename} =~ pom.xml$ ]]; then
>  35 add_test flinklib
>  36   fi
>  37 }
> {code}





[jira] [Updated] (HADOOP-12399) Wrong help messages in some test-patch plugins

2015-09-10 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12399:

Labels: newbie  (was: )

> Wrong help messages in some test-patch plugins
> --
>
> Key: HADOOP-12399
> URL: https://issues.apache.org/jira/browse/HADOOP-12399
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Priority: Minor
>  Labels: newbie
>
> dev-support/personality/bigtop.sh:
> {code}
>  32 function bigtop_usage
>  33 {
>  34   echo "Bigtop specific:"
>  35   echo "--bigtop-puppetsetup=[false|true]   execute the bigtop dev setup 
> (needs sudo to root)"
>  36 }
> {code}
> s/bigtop-puppetsetup/bigtop-puppet/.
> dev-support/test-patch.d/gradle.sh:
> {code}
>  21 function gradle_usage
>  22 {
>  23   echo "gradle specific:"
>  24   echo "--gradle-cmd=The 'gradle' command to use (default 
> 'gradle')"
>  25   echo "--gradlew-cmd=The 'gradle' command to use (default 
> 'basedir/gradlew')"
>  26 }
> {code}
> s/'gradle' command/'gradlew' command/ for the latter.





[jira] [Updated] (HADOOP-12397) Incomplete comment for test-patch compile_cycle function

2015-09-10 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12397:

Labels: newbie  (was: )

> Incomplete comment for test-patch compile_cycle function
> 
>
> Key: HADOOP-12397
> URL: https://issues.apache.org/jira/browse/HADOOP-12397
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Priority: Trivial
>  Labels: newbie
>
> Its comment says:
> {code}
> ## @description  This will callout to _precompile, compile, and _postcompile
> {code}
> but it calls _rebuild also.





[jira] [Created] (HADOOP-12401) Wrong function names in test-patch bigtop personality

2015-09-10 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12401:
---

 Summary: Wrong function names in test-patch bigtop personality
 Key: HADOOP-12401
 URL: https://issues.apache.org/jira/browse/HADOOP-12401
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki


In dev-support/personality/bigtop.sh:

{code}
 51 function bigtop_precheck_postinstall
 52 {
 53   if [[ ${BIGTOP_PUPPETSETUP} = "true" ]]; then
 54 pushd "${BASEDIR}" >/dev/null
 55 echo_and_redirect "${PATCH_DIR}/bigtop-branch-toolchain.txt" 
"${GRADLEW}" toolchain
 56 popd >/dev/null
 57   fi
 58 }
 59 
 60 function bigtop_postapply_postinstall
 61 {
 62   if [[ ${BIGTOP_PUPPETSETUP} = "true" ]]; then
 63 pushd "${BASEDIR}" >/dev/null
 64 echo_and_redirect "${PATCH_DIR}/bigtop-patch-toolchain.txt" 
"${GRADLEW}" toolchain
 65 popd >/dev/null
 66   fi
 67 }
{code}

Their names do not match the test-patch plugin callback naming convention. 
Maybe it should be something like:

{code}
function bigtop_precompile
{
  declare codebase=$1
  if [[ ${BIGTOP_PUPPETSETUP} = "true" && ( ${codebase} = "branch" || 
${codebase} = "patch" ) ]]; then
pushd "${BASEDIR}" >/dev/null
echo_and_redirect "${PATCH_DIR}/bigtop-${codebase}-toolchain.txt" 
"${GRADLEW}" toolchain
popd >/dev/null
  fi  
}
{code}





[jira] [Created] (HADOOP-12400) Wrong comment for scaladoc_rebuild function in test-patch scala plugin

2015-09-10 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12400:
---

 Summary: Wrong comment for scaladoc_rebuild function in test-patch 
scala plugin
 Key: HADOOP-12400
 URL: https://issues.apache.org/jira/browse/HADOOP-12400
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki
Priority: Trivial


{code}
 62 ## @description  Count and compare the number of JavaDoc warnings pre- and 
post- patch
 63 ## @audience private
 64 ## @stabilityevolving
 65 ## @replaceable  no
 66 ## @return   0 on success
 67 ## @return   1 on failure
 68 function scaladoc_rebuild
{code}

s/JavaDoc/ScalaDoc/





[jira] [Created] (HADOOP-12399) Wrong help messages in some test-patch plugins

2015-09-10 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12399:
---

 Summary: Wrong help messages in some test-patch plugins
 Key: HADOOP-12399
 URL: https://issues.apache.org/jira/browse/HADOOP-12399
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki
Priority: Minor


dev-support/personality/bigtop.sh:

{code}
 32 function bigtop_usage
 33 {
 34   echo "Bigtop specific:"
 35   echo "--bigtop-puppetsetup=[false|true]   execute the bigtop dev setup 
(needs sudo to root)"
 36 }
{code}

s/bigtop-puppetsetup/bigtop-puppet/.

dev-support/test-patch.d/gradle.sh:

{code}
 21 function gradle_usage
 22 {
 23   echo "gradle specific:"
 24   echo "--gradle-cmd=The 'gradle' command to use (default 
'gradle')"
 25   echo "--gradlew-cmd=The 'gradle' command to use (default 
'basedir/gradlew')"
 26 }
{code}

s/'gradle' command/'gradlew' command/ for the latter.





[jira] [Created] (HADOOP-12398) filefilter function is never called in test-patch flink personality

2015-09-10 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12398:
---

 Summary: filefilter function is never called in test-patch flink 
personality
 Key: HADOOP-12398
 URL: https://issues.apache.org/jira/browse/HADOOP-12398
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki


Wrong function name.

{code}
 28 function fliblib_filefilter
 29 {
 30   local filename=$1
 31 
 32   if [[ ${filename} =~ \.java$
 33 || ${filename} =~ \.scala$
 34 || ${filename} =~ pom.xml$ ]]; then
 35 add_test flinklib
 36   fi
 37 }
{code}





[jira] [Updated] (HADOOP-12398) filefilter function in test-patch flink personality is never called

2015-09-10 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12398:

Summary: filefilter function in test-patch flink personality is never 
called  (was: filefilter function is never called in test-patch flink 
personality)

> filefilter function in test-patch flink personality is never called
> ---
>
> Key: HADOOP-12398
> URL: https://issues.apache.org/jira/browse/HADOOP-12398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>
> Wrong function name.
> {code}
>  28 function fliblib_filefilter
>  29 {
>  30   local filename=$1
>  31 
>  32   if [[ ${filename} =~ \.java$
>  33 || ${filename} =~ \.scala$
>  34 || ${filename} =~ pom.xml$ ]]; then
>  35 add_test flinklib
>  36   fi
>  37 }
> {code}





[jira] [Created] (HADOOP-12397) Incomplete comment for test-patch compile_cycle function

2015-09-10 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12397:
---

 Summary: Incomplete comment for test-patch compile_cycle function
 Key: HADOOP-12397
 URL: https://issues.apache.org/jira/browse/HADOOP-12397
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki
Priority: Trivial


Its comment says:

{code}
## @description  This will callout to _precompile, compile, and _postcompile
{code}

but it calls _rebuild also.






[jira] [Updated] (HADOOP-12233) if CHANGED_FILES is corrupt, find_changed_modules never returns

2015-09-09 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12233:

Status: Patch Available  (was: Reopened)

> if CHANGED_FILES is corrupt, find_changed_modules never returns
> ---
>
> Key: HADOOP-12233
> URL: https://issues.apache.org/jira/browse/HADOOP-12233
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
> Fix For: HADOOP-12111
>
> Attachments: HADOOP-12233.HADOOP-12111.00.patch, 
> HADOOP-12233.HADOOP-12111.01.patch, HADOOP-12233.HADOOP-12111.02.patch
>
>
> While building some unit tests, I did a negative test and hit this condition. 
> We should put a limit on how many times we loop in the find_x_dirs code.
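
For illustration only (the real code is the bash find_x_dirs logic in 
test-patch, and the names here are hypothetical), the fix amounts to bounding 
the upward directory walk so corrupt input cannot make it spin forever:

{code}
# Hypothetical Python sketch of a bounded upward walk, illustrating the idea of
# capping the number of loop iterations; not the actual bash implementation.
import os

MAX_DEPTH = 32  # give up after this many levels instead of looping forever

def find_module_dir(changed_file, marker="pom.xml"):
    d = os.path.dirname(changed_file) or "."
    for _ in range(MAX_DEPTH):
        if os.path.exists(os.path.join(d, marker)):
            return d
        parent = os.path.dirname(d)
        if parent == d:  # reached the filesystem root
            break
        d = parent
    return None          # not found (or the input was corrupt)
{code}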





[jira] [Updated] (HADOOP-12233) if CHANGED_FILES is corrupt, find_changed_modules never returns

2015-09-09 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12233:

Attachment: HADOOP-12233.HADOOP-12111.02.patch

We may have overthought it. Is the 02 patch enough to solve this problem?

> if CHANGED_FILES is corrupt, find_changed_modules never returns
> ---
>
> Key: HADOOP-12233
> URL: https://issues.apache.org/jira/browse/HADOOP-12233
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
> Fix For: HADOOP-12111
>
> Attachments: HADOOP-12233.HADOOP-12111.00.patch, 
> HADOOP-12233.HADOOP-12111.01.patch, HADOOP-12233.HADOOP-12111.02.patch
>
>
> While building some unit tests, I did a negative test and hit this condition. 
> We should put a limit on how many times we loop in the find_x_dirs code.





[jira] [Commented] (HADOOP-12257) rework build tool support; add gradle; add scala

2015-09-09 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14736852#comment-14736852
 ] 

Kengo Seki commented on HADOOP-12257:
-

In addition to Sean's comments, I ran the updated test-patch for all 9 
currently supported projects and confirmed that the results correctly reflect 
the problems in the patches. Here are some examples.

BigTop:

{code}
| Vote |  Subsystem |  Runtime   | Comment

|  +1  |   @author  |  0m 00s| The patch does not contain any @author 
|  ||| tags.
|  -1  |test4tests  |  0m 00s| The patch doesn't appear to include any 
|  ||| new or modified tests. Please justify why
|  ||| no new tests are needed for this patch.
|  ||| Also please list what manual steps were
|  ||| performed to verify this patch.
|  +1  |gradleboot  |  0m 16s| master passed 
|  +1  |   compile  |  0m 09s| master passed 
|  -1  |  scaladoc  |  0m 09s| root in master failed. 
|  +1  |gradleboot  |  0m 14s| the patch passed 
|  +1  |   compile  |  0m 10s| the patch passed 
|  +1  |scalac  |  0m 10s| the patch passed 
|  +1  |whitespace  |  0m 00s| Patch has no whitespace issues. 
|  -1  |  scaladoc  |  0m 10s| root in the patch failed. 
|  -1  |asflicense  |  0m 10s| root in the patch failed. 
|  ||  1m 47s| 


|| Subsystem || Report/Notes ||

| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12754699/BIGTOP-2019.1.patch |
| JIRA Issue | BIGTOP-2019 |
| git revision | master / 5efdbfb |
| Optional Tests | asflicense scalac  scaladoc  unit  compile  |
| uname | Darwin mobile.local 14.5.0 Darwin Kernel Version 14.5.0: Wed Jul 29 
02:26:53 PDT 2015; root:xnu-2782.40.9~1/RELEASE_X86_64 x86_64 |
| Build tool | gradle |
| Personality | /Users/sekikn/hadoop/dev-support/personality/bigtop.sh |
| Default Java | 1.7.0_80 |
| scaladoc | /private/tmp/test-patch-bigtop/19287/branch-scaladoc-root.txt |
| scaladoc | /private/tmp/test-patch-bigtop/19287/patch-scaladoc-root.txt |
| asflicense | /private/tmp/test-patch-bigtop/19287/patch-asflicense-root.txt |
{code}

Kafka: failed to compile

{code}
| Vote |  Subsystem |  Runtime   | Comment

|  +1  |   @author  |  0m 00s| The patch does not contain any @author 
|  ||| tags.
|  +1  |test4tests  |  0m 00s| The patch appears to include 11 new or 
|  ||| modified test files.
|  +1  |gradleboot  |  3m 47s| trunk passed 
|  +1  |   compile  |  20m 12s   | trunk passed 
|  +1  |   javadoc  |  3m 37s| trunk passed 
|  +1  |  scaladoc  |  10m 01s   | trunk passed 
|  +1  |gradleboot  |  1m 55s| the patch passed 
|  -1  |   compile  |  10m 53s   | root in the patch failed. 
|  -1  | javac  |  10m 53s   | root in the patch failed. 
|  -1  |scalac  |  10m 53s   | root in the patch failed. 
|  +1  |whitespace  |  0m 01s| Patch has no whitespace issues. 
|  -1  |   javadoc  |  1m 09s| root in the patch failed. 
|  +1  |  scaladoc  |  2m 45s| the patch passed 
|  +1  |asflicense  |  0m 55s| Patch does not generate ASF License 
|  ||| warnings.
|  ||  62m 11s   | 


|| Subsystem || Report/Notes ||

| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12754305/KAFKA-2120_2015-09-04_17%3A49%3A01.patch
 |
| JIRA Issue | KAFKA-2120 |
| git revision | trunk / b8b1bca |
| Optional Tests | asflicense javac  javadoc  unit  findbugs  compile  scalac  
scaladoc  |
| uname | Darwin mobile.local 14.5.0 Darwin Kernel Version 14.5.0: Wed Jul 29 
02:26:53 PDT 2015; root:xnu-2782.40.9~1/RELEASE_X86_64 x86_64 |
| Build tool | gradle |
| Personality | /Users/sekikn/hadoop/dev-support/personality/kafka.sh |
| Default Java | 1.7.0_80 |
| compile | /private/tmp/test-patch-kafka/38835/patch-compile-root.txt |
| javac | /private/tmp/test-patch-kafka/38835/patch-compile-root.txt |
| scalac | /private/tmp/test-patch-kafka/38835/patch-compile-root.txt |
| findbugs | not supported by the gradle plugin |
| javadoc | /private/tmp/test-patch-kafka/38835/patch-javadoc-root.txt |
{code}

Samza: succeeded

{code}

| Vote |  Subsystem |  Runtime   | Comment

|  +1  |   @author  |  0m 00s| The patch 

[jira] [Commented] (HADOOP-12341) patch file confuses test-patch (date format problems)

2015-09-07 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14734248#comment-14734248
 ] 

Kengo Seki commented on HADOOP-12341:
-

bq. Thus that code was removed.

Makes sense.

bq. Let's effectively merge smart-apply-patch and test-patch.

Sounds good! By doing so, we can share the common patch-handling functions and 
remove the dependency on smart-apply-patch from test-patch.
But in my understanding, this change itself doesn't solve the original problem 
directly. Is that right?


> patch file confuses test-patch (date format problems)
> -
>
> Key: HADOOP-12341
> URL: https://issues.apache.org/jira/browse/HADOOP-12341
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12326.002.patch, 
> HADOOP-12375.HADOOP-12111.02.patch
>
>
> This was attached to HADOOP-12326 .



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12383) Document python version releasedocmaker supports

2015-09-07 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12383:
---

 Summary: Document python version releasedocmaker supports
 Key: HADOOP-12383
 URL: https://issues.apache.org/jira/browse/HADOOP-12383
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki
Priority: Minor


Maybe 2.5+?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12277) releasedocmaker index mode should create a readme.md in addition to a index.md

2015-09-07 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14733816#comment-14733816
 ] 

Kengo Seki commented on HADOOP-12277:
-

Small nits:

* 02.patch introduces 4 new pylint warnings. The first two should be fixed. I 
think the last two are ignorable because the source looks simple enough.

{code}
[sekikn@mobile hadoop]$ cat 
/private/tmp/test-patch-yetus/10595/diff-patch-pylint.txt
dev-support/releasedocmaker.py:106: [C0326(bad-whitespace), ] Exactly one space 
required after comma
versions.sort(key=LooseVersion,reverse=True)
  ^
dev-support/releasedocmaker.py:119: [C0326(bad-whitespace), ] Exactly one space 
required after comma
versions.sort(key=LooseVersion,reverse=True)
  ^
dev-support/releasedocmaker.py:117: [C0111(missing-docstring), buildreadme] 
Missing function docstring
dev-support/releasedocmaker.py:395: [R0915(too-many-statements), main] Too many 
statements (146/50)
{code}

* {{indexfile.close()}} after the with statement can probably be omitted, since the with block already closes the file.

{code}
104 def buildindex(title, asf_license):
105 versions = glob("[0-9]*.[0-9]*.[0-9]*")
106 versions.sort(key=LooseVersion,reverse=True)
107 with open("index.md", "w") as indexfile:
108 if asf_license is True:
109 indexfile.write(ASF_LICENSE)
110 for version in versions:
111 indexfile.write("* %s v%s\n" % (title, version))
112 for k in ("Changes", "Release Notes"):
113 indexfile.write("* [%s](%s/%s.%s.html)\n" \
114 % (k, version, k.upper().replace(" ", ""), version))
115 indexfile.close()
116 
117 def buildreadme(title, asf_license):
118 versions = glob("[0-9]*.[0-9]*.[0-9]*")
119 versions.sort(key=LooseVersion,reverse=True)
120 with open("README.md", "w") as indexfile:
121 if asf_license is True:
122 indexfile.write(ASF_LICENSE)
123 for version in versions:
124 indexfile.write("* %s v%s\n" % (title, version))
125 for k in ("Changes", "Release Notes"):
126 indexfile.write("* [%s](%s/%s.%s.md)\n" \
127 % (k, version, k.upper().replace(" ", ""), version))
128 indexfile.close()
{code}

Otherwise, +1 non-binding.

> releasedocmaker index mode should create a readme.md in addition to a index.md
> --
>
> Key: HADOOP-12277
> URL: https://issues.apache.org/jira/browse/HADOOP-12277
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-12277.HADOOP-12111.00.patch, 
> HADOOP-12277.HADOOP-12111.01.patch, HADOOP-12277.HADOOP-12111.02.patch
>
>
> The content should be the same, however, rather than a mvn site-compatible 
> links, it should be github-compatible markdown links.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12380) Wrong grep command invocation in github_find_jira_title

2015-09-07 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14733769#comment-14733769
 ] 

Kengo Seki commented on HADOOP-12380:
-

Thanks [~brahmareddy], +1 non-binding.

{code}
[sekikn@mobile hadoop]$ bash -x dev-support/test-patch.sh --basedir=../pig 
--project=pig --grep-cmd=/usr/bin/grep 18

(snip)

+ github_find_jira_title
+ declare title
+ declare maybe
+ declare retval
+ [[ ! -f /private/tmp/test-patch-pig/2968/github-pull.json ]]
++ /usr/bin/grep title /private/tmp/test-patch-pig/2968/github-pull.json
++ cut -f4 '-d"'
+ title=PIG-4373

(snip)
{code}

> Wrong grep command invocation in github_find_jira_title
> ---
>
> Key: HADOOP-12380
> URL: https://issues.apache.org/jira/browse/HADOOP-12380
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Brahma Reddy Battula
>  Labels: newbie
> Attachments: HADOOP-12380.HADOOP-12111.01.patch
>
>
> {code}
> 139   title=$(GREP title "${PATCH_DIR}/github-pull.json" \
> 140 | cut -f4 -d\")
> {code}
> should be {{title=$($\{GREP} ...)}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12341) patch file confuses test-patch (date format problems)

2015-09-06 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14733238#comment-14733238
 ] 

Kengo Seki commented on HADOOP-12341:
-

To resolve this issue, I have some questions:

* test-patch used to skip file type guessing if the file had a .patch suffix, but 
this behaviour was changed in HADOOP-12129. Are there any concerns about restoring 
the old behaviour?

* Or, are there any concerns about matching the following second and third patterns 
against all lines in the file instead of only the first one (see the sketch after 
this list)?

{code}
1662   fileOutput=$(head -n 1 "${patch}" | "${GREP}" -E "^(From [a-z0-9]* Mon 
Sep 17 00:00:00 2001)|(diff .*)|(Index: .*)$")
{code}

* Sean said:

bq. we detect git patches made with format-patch, but we only ever use git 
apply, rather than the git am needed in that case.

but I tried some patches made with format-patch, and {{git apply}} worked on 
all of them. If we can't find any example immediately, can we postpone the fix 
for this case?
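
For the second bullet, a rough sketch of the idea in guess_patch_file (a sketch only, 
not a tested patch) might look like:

{code}
# keep the "From ... Mon Sep 17 00:00:00 2001" check on the first line only,
# but look for "diff ..." or "Index: ..." anywhere in the file
if head -n 1 "${patch}" | "${GREP}" -q -E "^From [a-z0-9]* Mon Sep 17 00:00:00 2001$" \
    || "${GREP}" -q -E "^(diff .*)|(Index: .*)$" "${patch}"; then
  return 0
fi
{code}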

> patch file confuses test-patch (date format problems)
> -
>
> Key: HADOOP-12341
> URL: https://issues.apache.org/jira/browse/HADOOP-12341
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12326.002.patch, 
> HADOOP-12375.HADOOP-12111.02.patch
>
>
> This was attached to HADOOP-12326 .



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12380) Wrong grep command invocation in github_find_jira_title

2015-09-06 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12380:
---

 Summary: Wrong grep command invocation in github_find_jira_title
 Key: HADOOP-12380
 URL: https://issues.apache.org/jira/browse/HADOOP-12380
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki


{code}
139   title=$(GREP title "${PATCH_DIR}/github-pull.json" \
140 | cut -f4 -d\")
{code}

should be {{title=$($\{GREP} ...)}}
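
In other words, the fixed line would be along the lines of:

{code}
title=$(${GREP} title "${PATCH_DIR}/github-pull.json" \
  | cut -f4 -d\")
{code}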



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-4258) the test patch script should check for filenames that differ only in case

2015-09-06 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14732415#comment-14732415
 ] 

Kengo Seki commented on HADOOP-4258:


Thanks [~jagadesh.kiran], some comments:

* K and L are not very readable either. How about "$first", "$second" or something? 
Also, $a and $b should be renamed accordingly, e.g. "$first_lower", "$second_lower".

* Address Allen's other comments. For test-patch plugin hooks such as 
postcheckout or preapply, see the [yetus precommit 
document|https://github.com/apache/hadoop/blob/HADOOP-12111/dev-support/docs/precommit-advanced.md#test-plug-ins].

* This code

{code}
 34   declare -a changed_files
 35   i=0
 36   # shellcheck disable=SC2153
 37   while read line; do
 38 changed_files[${i}]="${line}"
 39 ((i=i+1))
 40   done < <(echo "${CHANGED_FILES}")
{code}

is intended to store newline-separated strings in an array, but it is very verbose 
(sorry, my fault). Instead,

{code}
declare -a changed_files
changed_files=(${CHANGED_FILES})
{code}

should work; a short demo follows this list.

* And, please don't forget to shorten the plugin name. ;)
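
To make the third bullet concrete, here is a minimal demo of the suggested 
assignment (it assumes the file names contain no embedded whitespace):

{code}
CHANGED_FILES=$'dir1/Foo.java\ndir2/foo.java'
declare -a changed_files
changed_files=(${CHANGED_FILES})
echo "${#changed_files[@]}"   # prints 2
echo "${changed_files[1]}"    # prints dir2/foo.java
{code}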

> the test patch script should check for filenames that differ only in case
> -
>
> Key: HADOOP-4258
> URL: https://issues.apache.org/jira/browse/HADOOP-4258
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, test
>Reporter: Owen O'Malley
>Assignee: Jagadesh Kiran N
>  Labels: test-patch
> Attachments: HADOOP-4258.001.patch, HADOOP-4258.HADOOP-12111.00.patch
>
>
> It would be nice if the test patch script warned about filenames that differ 
> only in case. We recently had a patch committed that had a pair of colliding 
> filenames and subversion broke badly on my Mac.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12246) Apache Hadoop should be listed under "big-data" and "hadoop" categories

2015-09-04 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14731798#comment-14731798
 ] 

Kengo Seki commented on HADOOP-12246:
-

Hadoop uses git for the main source tree, and the precommit job assumes submitted 
patches are meant for the git repository.
I'm a relative newcomer to Hadoop development and not familiar with the ASF 
infra or doap.rdf, but experts will advise you in time.
In any case, you can ignore the precommit error message.

> Apache Hadoop should be listed under "big-data" and "hadoop" categories
> ---
>
> Key: HADOOP-12246
> URL: https://issues.apache.org/jira/browse/HADOOP-12246
> Project: Hadoop Common
>  Issue Type: Task
>  Components: site
>Reporter: Ajoy Bhatia
>Assignee: Ajoy Bhatia
>Priority: Trivial
>  Labels: categories
> Fix For: site
>
> Attachments: site.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> In the "Projects by category" list on 
> https://projects.apache.org/projects.html?category , Apache Hadoop is listed 
> in the database category only.
> Apache Hadoop project should also be listed under big-data and hadoop 
> categories.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12349) Misleading debug message in generic_locate_patch

2015-09-04 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14731296#comment-14731296
 ] 

Kengo Seki commented on HADOOP-12349:
-

Thanks, +1 non-binding.

> Misleading debug message in generic_locate_patch
> 
>
> Key: HADOOP-12349
> URL: https://issues.apache.org/jira/browse/HADOOP-12349
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-12349.HADOOP-12111.00.patch, 
> HADOOP-12349.HADOOP-12111.01.patch
>
>
> The following message in builtin-bugsystem.sh is duplicated with 
> jira_locate_patch and misleading.
> {code}
> yetus_debug "jira_locate_patch: not a JIRA."
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12341) patch file confuses test-patch (date format problems)

2015-09-04 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12341:

Attachment: HADOOP-12375.HADOOP-12111.02.patch

Attaching another case. This was originally attached to HADOOP-12375.
{{git apply}} works on it, but test-patch fails as follows:

{code}
/tmp/test-patch-hadoop/2043 has been created
Running in developer mode
Patch file /Users/sekikn/Desktop/HADOOP-12375.HADOOP-12111.02.patch copied to 
/private/tmp/test-patch-hadoop/2043
ERROR: Unsure how to process 
/Users/sekikn/Desktop/HADOOP-12375.HADOOP-12111.02.patch.
{code}

with bash -x:

{code}
Patch file /Users/sekikn/Desktop/HADOOP-12375.HADOOP-12111.02.patch copied to 
/private/tmp/test-patch-hadoop/2071
+ guess_patch_file /private/tmp/test-patch-hadoop/2071/patch
+ local patch=/private/tmp/test-patch-hadoop/2071/patch
+ local fileOutput
+ [[ ! -f /private/tmp/test-patch-hadoop/2071/patch ]]
+ yetus_debug 'Trying to guess is /private/tmp/test-patch-hadoop/2071/patch is 
a patch file.'
+ [[ -n '' ]]
++ file /private/tmp/test-patch-hadoop/2071/patch
+ fileOutput='/private/tmp/test-patch-hadoop/2071/patch: ASCII text'
+ [[ /private/tmp/test-patch-hadoop/2071/patch: ASCII text =~  diff  ]]
++ head -n 1 /private/tmp/test-patch-hadoop/2071/patch
++ grep -E '^(From [a-z0-9]* Mon Sep 17 00:00:00 2001)|(diff .*)|(Index: .*)$'
+ fileOutput=
+ [[ 1 == 0 ]]
+ return 1
+ [[ 1 != 0 ]]
+ yetus_error 'ERROR: Unsure how to process 
/Users/sekikn/Desktop/HADOOP-12375.HADOOP-12111.02.patch.'
+ echo 'ERROR: Unsure how to process 
/Users/sekikn/Desktop/HADOOP-12375.HADOOP-12111.02.patch.'
ERROR: Unsure how to process 
/Users/sekikn/Desktop/HADOOP-12375.HADOOP-12111.02.patch.
{code}

> patch file confuses test-patch (date format problems)
> -
>
> Key: HADOOP-12341
> URL: https://issues.apache.org/jira/browse/HADOOP-12341
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12326.002.patch, 
> HADOOP-12375.HADOOP-12111.02.patch
>
>
> This was attached to HADOOP-12326 .



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11897) test-patch.sh plugins should abbreviate the path

2015-09-04 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-11897:

Component/s: yetus

> test-patch.sh plugins should abbreviate the path
> 
>
> Key: HADOOP-11897
> URL: https://issues.apache.org/jira/browse/HADOOP-11897
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test, yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Priority: Trivial
>  Labels: newbie
>
> The current output of checkstyle, shellcheck, and whitespace results in very 
> long file name paths.  It might be useful to abbreviate them in some way, 
> maybe removing the entire path and leaving just the filename or last 
> directory+filename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11897) test-patch.sh plugins should abbreviate the path

2015-09-04 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-11897:

Affects Version/s: HADOOP-12111

> test-patch.sh plugins should abbreviate the path
> 
>
> Key: HADOOP-11897
> URL: https://issues.apache.org/jira/browse/HADOOP-11897
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Priority: Trivial
>  Labels: newbie
>
> The current output of checkstyle, shellcheck, and whitespace results in very 
> long file name paths.  It might be useful to abbreviate them in some way, 
> maybe removing the entire path and leaving just the filename or last 
> directory+filename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11897) test-patch.sh plugins should abbreviate the path

2015-09-04 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-11897:

Issue Type: Sub-task  (was: Test)
Parent: HADOOP-12111

> test-patch.sh plugins should abbreviate the path
> 
>
> Key: HADOOP-11897
> URL: https://issues.apache.org/jira/browse/HADOOP-11897
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Allen Wittenauer
>Priority: Trivial
>  Labels: newbie
>
> The current output of checkstyle, shellcheck, and whitespace results in very 
> long file name paths.  It might be useful to abbreviate them in some way, 
> maybe removing the entire path and leaving just the filename or last 
> directory+filename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11897) test-patch.sh plugins should abbreviate the path

2015-09-04 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14731263#comment-14731263
 ] 

Kengo Seki commented on HADOOP-11897:
-

This may already be fixed in the latest Yetus code, but I found some strange output.

{code}
$ git checkout -b HADOOP-12111 origin/HADOOP-12111 # or simply `git checkout 
HADOOP-12111` if you've already fetched the remote branch
$ dev-support/test-patch.sh --basedir=/path/to/another/hadoop/repo 
--project=hadoop 
https://issues.apache.org/jira/secure/attachment/12731148/HADOOP-7266.001.patch
{code}

will produce the following result (I skipped all mvn running with Ctrl-C):

{code}
|| Subsystem || Report/Notes ||

| git revision | trunk / d16c4ee |
| Optional Tests | asflicense javac javadoc mvninstall unit findbugs checkstyle 
|
| uname | Darwin mobile.local 14.5.0 Darwin Kernel Version 14.5.0: Wed Jul 29 
02:26:53 PDT 2015; root:xnu-2782.40.9~1/RELEASE_X86_64 x86_64 |
| Build tool | maven |
| Personality | /Users/sekikn/work/hadoop/dev-support/personality/hadoop.sh |
| Default Java | 1.7.0_80 |
| mvninstall | /private/tmp/test-patch-hadoop/61296/branch-mvninstall-root.txt |
| javac | /private/tmp/test-patch-hadoop/61296/branch-javac-root.txt |
| javadoc | 
/private/tmp/test-patch-hadoop/61296/branch-javadoc-hadoop-common-project_hadoop-common.txt
 |
| javadoc | 
/private/tmp/test-patch-hadoop/61296/branch-javadoc-hadoop-tools_hadoop-streaming.txt
 |
| checkstyle | 
/private/tmp/test-patch-hadoop/61296//private/tmp/test-patch-hadoop/61296/maven-branch-checkstyle-root.txt
 |
| javac | /private/tmp/test-patch-hadoop/61296/patch-javac-root.txt |
| asflicense | /private/tmp/test-patch-hadoop/61296/patch-asflicense-root.txt |
| checkstyle | 
/private/tmp/test-patch-hadoop/61296//private/tmp/test-patch-hadoop/61296/maven-patch-checkstyle-root.txt
 |
| mvninstall | 
/private/tmp/test-patch-hadoop/61296/patch-mvninstall-hadoop-common-project_hadoop-common.txt
 |
| mvninstall | 
/private/tmp/test-patch-hadoop/61296/patch-mvninstall-hadoop-tools_hadoop-streaming.txt
 |
| javadoc | 
/private/tmp/test-patch-hadoop/61296/patch-javadoc-hadoop-common-project_hadoop-common.txt
 |
| javadoc | 
/private/tmp/test-patch-hadoop/61296/patch-javadoc-hadoop-tools_hadoop-streaming.txt
 |
| eclipse | 
/private/tmp/test-patch-hadoop/61296/patch-eclipse-hadoop-common-project_hadoop-common.txt
 |
| eclipse | 
/private/tmp/test-patch-hadoop/61296/patch-eclipse-hadoop-tools_hadoop-streaming.txt
 |
| findbugs | 
/private/tmp/test-patch-hadoop/61296/patch-findbugs-hadoop-common-project_hadoop-common.txt
 |
| findbugs | 
/private/tmp/test-patch-hadoop/61296/patch-findbugs-hadoop-tools_hadoop-streaming.txt
 |
{code}

The checkstyle path is very long because the directory path is duplicated.
As far as I investigated, shellcheck and whitespace output the correct path.

> test-patch.sh plugins should abbreviate the path
> 
>
> Key: HADOOP-11897
> URL: https://issues.apache.org/jira/browse/HADOOP-11897
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Allen Wittenauer
>Priority: Trivial
>  Labels: newbie
>
> The current output of checkstyle, shellcheck, and whitespace results in very 
> long file name paths.  It might be useful to abbreviate them in some way, 
> maybe removing the entire path and leaving just the filename or last 
> directory+filename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-4258) the test patch script should check for filenames that differ only in case

2015-09-04 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki reassigned HADOOP-4258:
--

Assignee: Jagadesh Kiran N

> the test patch script should check for filenames that differ only in case
> -
>
> Key: HADOOP-4258
> URL: https://issues.apache.org/jira/browse/HADOOP-4258
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, test
>Reporter: Owen O'Malley
>Assignee: Jagadesh Kiran N
>  Labels: newbie, test-patch
> Attachments: HADOOP-4258.001.patch
>
>
> It would be nice if the test patch script warned about filenames that differ 
> only in case. We recently had a patch committed that had a pair of colliding 
> filenames and subversion broke badly on my Mac.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-4258) the test patch script should check for filenames that differ only in case

2015-09-04 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14731124#comment-14731124
 ] 

Kengo Seki commented on HADOOP-4258:


Thanks [~jagadesh.kiran], that's very helpful!
When you update the patch, could you change its plugin name to something like 
"dupname"? The current name "duplicatednames" is too long as a plugin name.

> the test patch script should check for filenames that differ only in case
> -
>
> Key: HADOOP-4258
> URL: https://issues.apache.org/jira/browse/HADOOP-4258
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, test
>Reporter: Owen O'Malley
>  Labels: newbie, test-patch
> Attachments: HADOOP-4258.001.patch
>
>
> It would be nice if the test patch script warned about filenames that differ 
> only in case. We recently had a patch committed that had a pair of colliding 
> filenames and subversion broke badly on my Mac.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12349) Misleading debug message in generic_locate_patch

2015-09-04 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14731089#comment-14731089
 ] 

Kengo Seki commented on HADOOP-12349:
-

Thanks! But I think it's more user-friendly to leave a debug message that 
indicates we are in generic_locate_patch and that the download failed, 
rather than simply removing the message. Could you change it to 
"generic_locate_patch: failed to download a patch" or something similar?

> Misleading debug message in generic_locate_patch
> 
>
> Key: HADOOP-12349
> URL: https://issues.apache.org/jira/browse/HADOOP-12349
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-12349.HADOOP-12111.00.patch
>
>
> The following message in builtin-bugsystem.sh is duplicated with 
> jira_locate_patch and misleading.
> {code}
> yetus_debug "jira_locate_patch: not a JIRA."
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12355) test-patch TAP plugin should use ${SED} instead of sed

2015-09-04 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14731060#comment-14731060
 ] 

Kengo Seki commented on HADOOP-12355:
-

LGTM, +1 non-binding.

> test-patch TAP plugin should use ${SED} instead of sed
> --
>
> Key: HADOOP-12355
> URL: https://issues.apache.org/jira/browse/HADOOP-12355
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-12355.HADOOP-12111.00.patch
>
>
> for consistency and platform compatibility.
> {code}
>  54   if [[ -n "${filenames}" ]]; then
>  55 module_failed_tests=$(echo "${filenames}" \
>  56   | sed -e "s,${TAP_LOG_DIR},,g" -e s,^/,,g )
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12375) Incomplete checking for findbugs executable

2015-09-04 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14731040#comment-14731040
 ] 

Kengo Seki commented on HADOOP-12375:
-

LGTM, +1 non-binding.

BTW, my last comment

bq. your patch seems to have an unnecessary header. Would you remove the first 
3 lines?

was a mistake. 00 and 01 are not applicable because the target file is wrong, as 
Allen said, not because of the header. Sorry.

> Incomplete checking for findbugs executable
> ---
>
> Key: HADOOP-12375
> URL: https://issues.apache.org/jira/browse/HADOOP-12375
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>  Labels: newbie
> Attachments: HADOOP-12375.HADOOP-12111.00.patch, 
> HADOOP-12375.HADOOP-12111.01.patch, HADOOP-12375.HADOOP-12111.02.patch
>
>
> In test-patch.d/findbugs.sh:
> {code}
>  65 function findbugs_is_installed
>  66 {
>  67   if [[ ! -e "${FINDBUGS_HOME}/bin/findbugs" ]]; then
>  68 printf "\n\n%s is not executable.\n\n" "${FINDBUGS_HOME}/bin/findbugs"
>  69 add_vote_table -1 findbugs "Findbugs is not installed."
>  70 return 1
>  71   fi
>  72   return 0
>  73 }
> {code}
> should be -x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-4258) the test patch script should check for filenames that differ only in case

2015-09-03 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729317#comment-14729317
 ] 

Kengo Seki commented on HADOOP-4258:


Cancelled patch because the current one is not for the HADOOP-12111 branch.

> the test patch script should check for filenames that differ only in case
> -
>
> Key: HADOOP-4258
> URL: https://issues.apache.org/jira/browse/HADOOP-4258
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, test
>Reporter: Owen O'Malley
>  Labels: newbie, test-patch
> Attachments: HADOOP-4258.001.patch
>
>
> It would be nice if the test patch script warned about filenames that differ 
> only in case. We recently had a patch committed that had a pair of colliding 
> filenames and subversion broke badly on my Mac.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-4258) the test patch script should check for filenames that differ only in case

2015-09-03 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-4258:
---
Issue Type: Sub-task  (was: Test)
Parent: HADOOP-12111

> the test patch script should check for filenames that differ only in case
> -
>
> Key: HADOOP-4258
> URL: https://issues.apache.org/jira/browse/HADOOP-4258
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, test
>Reporter: Owen O'Malley
>Assignee: Kengo Seki
>  Labels: newbie, test-patch
> Attachments: HADOOP-4258.001.patch
>
>
> It would be nice if the test patch script warned about filenames that differ 
> only in case. We recently had a patch committed that had a pair of colliding 
> filenames and subversion broke badly on my Mac.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-4258) the test patch script should check for filenames that differ only in case

2015-09-03 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-4258:
---
Assignee: (was: Kengo Seki)
  Status: Open  (was: Patch Available)

> the test patch script should check for filenames that differ only in case
> -
>
> Key: HADOOP-4258
> URL: https://issues.apache.org/jira/browse/HADOOP-4258
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, test
>Reporter: Owen O'Malley
>  Labels: newbie, test-patch
> Attachments: HADOOP-4258.001.patch
>
>
> It would be nice if the test patch script warned about filenames that differ 
> only in case. We recently had a patch committed that had a pair of colliding 
> filenames and subversion broke badly on my Mac.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-4258) the test patch script should check for filenames that differ only in case

2015-09-03 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729310#comment-14729310
 ] 

Kengo Seki commented on HADOOP-4258:


This feature is not yet incorporated in Yetus, so I'll make it a subtask of 
HADOOP-12111 for now. Thanks [~ozawa] for your reminder!

> the test patch script should check for filenames that differ only in case
> -
>
> Key: HADOOP-4258
> URL: https://issues.apache.org/jira/browse/HADOOP-4258
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Reporter: Owen O'Malley
>Assignee: Kengo Seki
>  Labels: newbie, test-patch
> Attachments: HADOOP-4258.001.patch
>
>
> It would be nice if the test patch script warned about filenames that differ 
> only in case. We recently had a patch committed that had a pair of colliding 
> filenames and subversion broke badly on my Mac.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12375) Incomplete checking for findbugs executable

2015-09-03 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729290#comment-14729290
 ] 

Kengo Seki commented on HADOOP-12375:
-

Thanks [~jagadesh.kiran], but your patch seems to have an unnecessary header. 
Would you remove the first 3 lines?

> Incomplete checking for findbugs executable
> ---
>
> Key: HADOOP-12375
> URL: https://issues.apache.org/jira/browse/HADOOP-12375
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>  Labels: newbie
> Attachments: HADOOP-12375.HADOOP-12111.00.patch
>
>
> In test-patch.d/findbugs.sh:
> {code}
>  65 function findbugs_is_installed
>  66 {
>  67   if [[ ! -e "${FINDBUGS_HOME}/bin/findbugs" ]]; then
>  68 printf "\n\n%s is not executable.\n\n" "${FINDBUGS_HOME}/bin/findbugs"
>  69 add_vote_table -1 findbugs "Findbugs is not installed."
>  70 return 1
>  71   fi
>  72   return 0
>  73 }
> {code}
> should be -x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12312) Findbugs HTML report link shows 0 warnings despite errors

2015-09-03 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12312:

Status: Patch Available  (was: Reopened)

> Findbugs HTML report link shows 0 warnings despite errors
> -
>
> Key: HADOOP-12312
> URL: https://issues.apache.org/jira/browse/HADOOP-12312
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Varun Saxena
>Assignee: Kengo Seki
> Attachments: HADOOP-12312.HADOOP-12111.00.patch, Screen Shot 
> 2015-09-04 at 00.25.44.png, hadoop-reproduciable.git2c4208f.patch
>
>
> Refer to Hadoop QA report below :
> https://issues.apache.org/jira/browse/YARN-3232?focusedCommentId=14679146&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14679146
> The report shows -1 for findbugs because there have been 7 findbugs warnings 
> introduced. But the HTML report in link is showing 0 findbugs warnings.
> I verified locally and the warnings did indeed exist.
> So there must be some problem in findbugs HTML report generation in 
> test-patch.sh
> This inconsistency between -1 for findbugs and HTML report lead to these 
> findbugs warnings being leaked to trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12312) Findbugs HTML report link shows 0 warnings despite errors

2015-09-03 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12312:

Attachment: Screen Shot 2015-09-04 at 00.25.44.png
HADOOP-12312.HADOOP-12111.00.patch

Attaching a patch. Abbreviated console output is as follows:

{code}
[sekikn@mobile hadoop]$ dev-support/test-patch.sh --basedir=../dev/hadoop 
--project=hadoop 
https://issues.apache.org/jira/secure/attachment/12751537/hadoop-reproduciable.git2c4208f.patch

(snip)

| Vote |  Subsystem |  Runtime   | Comment

|  -1  |  findbugs  |  2m 53s| hadoop-common-project/hadoop-common 
|  ||| introduced 2 new FindBugs issues.


Reason | Tests
  FindBugs  |  module:hadoop-common-project/hadoop-common 
    |  Found reliance on default encoding in 
org.apache.hadoop.metrics2.sink.KafkaSink.putMetrics(MetricsRecord):in 
org.apache.hadoop.metrics2.sink.KafkaSink.putMetrics(MetricsRecord): 
String.getBytes() At KafkaSink.java:[line 135] 
    |  
org.apache.hadoop.metrics2.sink.KafkaSink.putMetrics(MetricsRecord) invokes 
inefficient new String(String) constructor At KafkaSink.java:constructor At 
KafkaSink.java:[line 96] 


|| Subsystem || Report/Notes ||

| findbugs | v3.0.0 |
| findbugs | 
/private/tmp/test-patch-hadoop/58204/new-findbugs-hadoop-common-project_hadoop-common.html
 |
{code}

The screenshot of new-findbugs-hadoop-common-project_hadoop-common.html is also 
attached. It displays the correct information.

> Findbugs HTML report link shows 0 warnings despite errors
> -
>
> Key: HADOOP-12312
> URL: https://issues.apache.org/jira/browse/HADOOP-12312
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Varun Saxena
>Assignee: Kengo Seki
> Attachments: HADOOP-12312.HADOOP-12111.00.patch, Screen Shot 
> 2015-09-04 at 00.25.44.png, hadoop-reproduciable.git2c4208f.patch
>
>
> Refer to Hadoop QA report below :
> https://issues.apache.org/jira/browse/YARN-3232?focusedCommentId=14679146&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14679146
> The report shows -1 for findbugs because there have been 7 findbugs warnings 
> introduced. But the HTML report in link is showing 0 findbugs warnings.
> I verified locally and the warnings did indeed exist.
> So there must be some problem in findbugs HTML report generation in 
> test-patch.sh
> This inconsistency between -1 for findbugs and HTML report lead to these 
> findbugs warnings being leaked to trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12375) Incomplete checking for findbugs executable

2015-09-02 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12375:
---

 Summary: Incomplete checking for findbugs executable
 Key: HADOOP-12375
 URL: https://issues.apache.org/jira/browse/HADOOP-12375
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki


In test-patch.d/findbugs.sh:

{code}
 65 function findbugs_is_installed
 66 {
 67   if [[ ! -e "${FINDBUGS_HOME}/bin/findbugs" ]]; then
 68 printf "\n\n%s is not executable.\n\n" "${FINDBUGS_HOME}/bin/findbugs"
 69 add_vote_table -1 findbugs "Findbugs is not installed."
 70 return 1
 71   fi
 72   return 0
 73 }
{code}

should be -x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12312) Findbugs HTML report link shows 0 warnings despite errors

2015-09-02 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki reassigned HADOOP-12312:
---

Assignee: Kengo Seki

> Findbugs HTML report link shows 0 warnings despite errors
> -
>
> Key: HADOOP-12312
> URL: https://issues.apache.org/jira/browse/HADOOP-12312
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Varun Saxena
>Assignee: Kengo Seki
> Attachments: hadoop-reproduciable.git2c4208f.patch
>
>
> Refer to Hadoop QA report below :
> https://issues.apache.org/jira/browse/YARN-3232?focusedCommentId=14679146&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14679146
> The report shows -1 for findbugs because there have been 7 findbugs warnings 
> introduced. But the HTML report in link is showing 0 findbugs warnings.
> I verified locally and the warnings did indeed exist.
> So there must be some problem in findbugs HTML report generation in 
> test-patch.sh
> This inconsistency between -1 for findbugs and HTML report lead to these 
> findbugs warnings being leaked to trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12312) Findbugs HTML report link shows 0 warnings despite errors

2015-09-02 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14728490#comment-14728490
 ] 

Kengo Seki commented on HADOOP-12312:
-

This code:

{code}
315 #shellcheck disable=SC2016
316 new_findbugs_warnings=$("${FINDBUGS_HOME}/bin/filterBugs" -first patch \
317 "${combined_xml}" "${newbugsbase}.xml" | ${AWK} '{print $1}')
318 if [[ $? != 0 ]]; then
319   popd >/dev/null
320   module_status ${i} -1 "" "${module} cannot run filterBugs (#1) from 
findbugs"
321   ((result=result+1))
322   savestop=$(stop_clock)
323   MODULE_STATUS_TIMER[${i}]=${savestop}
324   ((i=i+1))
325   continue
326 fi
327 
328 #shellcheck disable=SC2016
329 new_findbugs_fixed_warnings=$("${FINDBUGS_HOME}/bin/filterBugs" -fixed 
patch \
330 "${combined_xml}" "${newbugsbase}.xml" | ${AWK} '{print $1}')
331 if [[ $? != 0 ]]; then
332   popd >/dev/null
333   module_status ${i} -1 "" "${module} cannot run filterBugs (#2) from 
findbugs"
334   ((result=result+1))
335   savestop=$(stop_clock)
336   MODULE_STATUS_TIMER[${i}]=${savestop}
337   ((i=i+1))
338   continue
339 fi
{code}

writes to $\{newbugsbase}.xml twice, so the list of new warnings gets overwritten by the second invocation.
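
A sketch of one possible fix would be to have the second invocation write to a 
separate file (the $\{newbugsbase}-fixed.xml name below is just a placeholder):

{code}
#shellcheck disable=SC2016
new_findbugs_fixed_warnings=$("${FINDBUGS_HOME}/bin/filterBugs" -fixed patch \
    "${combined_xml}" "${newbugsbase}-fixed.xml" | ${AWK} '{print $1}')
{code}

so that the new-warnings output in $\{newbugsbase}.xml is left intact for the HTML report.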

> Findbugs HTML report link shows 0 warnings despite errors
> -
>
> Key: HADOOP-12312
> URL: https://issues.apache.org/jira/browse/HADOOP-12312
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Varun Saxena
> Attachments: hadoop-reproduciable.git2c4208f.patch
>
>
> Refer to Hadoop QA report below :
> https://issues.apache.org/jira/browse/YARN-3232?focusedCommentId=14679146&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14679146
> The report shows -1 for findbugs because there have been 7 findbugs warnings 
> introduced. But the HTML report in link is showing 0 findbugs warnings.
> I verified locally and the warnings did indeed exist.
> So there must be some problem in findbugs HTML report generation in 
> test-patch.sh
> This inconsistency between -1 for findbugs and HTML report lead to these 
> findbugs warnings being leaked to trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12118) Validate xml configuration files with XML Schema

2015-09-02 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14728218#comment-14728218
 ] 

Kengo Seki commented on HADOOP-12118:
-

If we proceed as in my last comment, I'd like to hold the backport to branch-2 
until the refactoring on trunk is finished, because it will change the output 
message format.

> Validate xml configuration files with XML Schema
> 
>
> Key: HADOOP-12118
> URL: https://issues.apache.org/jira/browse/HADOOP-12118
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Christopher Tubbs
> Attachments: HADOOP-7947.branch-2.1.patch, hadoop-configuration.xsd
>
>
> I spent an embarrassingly long time today trying to figure out why the 
> following wouldn't work.
> {code}
> 
>   fs.defaultFS
>   hdfs://localhost:9000
> 
> {code}
> I just kept getting an error about no authority for {{fs.defaultFS}}, with a 
> value of {{file:///}}, which made no sense... because I knew it was there.
> The problem was that the {{core-site.xml}} was parsed entirely without any 
> validation. This seems incorrect. The very least that could be done is a 
> simple XML Schema validation against an XSD, before parsing. That way, users 
> will get immediate failures on common typos and other problems in the xml 
> configuration files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12118) Validate xml configuration files with XML Schema

2015-08-31 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14724763#comment-14724763
 ] 

Kengo Seki commented on HADOOP-12118:
-

Sorry [~gliptak] for the late response. I agree with the backport because it 
should be useful for 2.x users, but I couldn't decide whether we should adopt 
XSD for validation or not.
One possibility is to use XSD for the basic structure validation and XPath (for 
example) for the advanced validations I mentioned above, avoiding direct XML 
walking, like:

{code}
XPath xpath = XPathFactory.newInstance().newXPath();
NodeList nodes = (NodeList) 
xpath.evaluate("/configuration/property/name/text()",
new InputSource("core-site.xml"), XPathConstants.NODESET);
Set<String> s = new HashSet<String>();
for (int i = 0; i < nodes.getLength(); i++) {
  s.add(nodes.item(i).getNodeValue());
}
{code}

> Validate xml configuration files with XML Schema
> 
>
> Key: HADOOP-12118
> URL: https://issues.apache.org/jira/browse/HADOOP-12118
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Christopher Tubbs
> Attachments: HADOOP-7947.branch-2.1.patch, hadoop-configuration.xsd
>
>
> I spent an embarrassingly long time today trying to figure out why the 
> following wouldn't work.
> {code}
> 
>   fs.defaultFS
>   hdfs://localhost:9000
> 
> {code}
> I just kept getting an error about no authority for {{fs.defaultFS}}, with a 
> value of {{file:///}}, which made no sense... because I knew it was there.
> The problem was that the {{core-site.xml}} was parsed entirely without any 
> validation. This seems incorrect. The very least that could be done is a 
> simple XML Schema validation against an XSD, before parsing. That way, users 
> will get immediate failures on common typos and other problems in the xml 
> configuration files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12336) github_jira_bridge doesn't work

2015-08-31 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12336:

Attachment: HADOOP-12336.HADOOP-12111.01.patch

Thanks! Attaching a revised patch.
I confirmed it works using HADOOP-11820:

{code}
[sekikn@mobile hadoop]$ dev-support/test-patch.sh --basedir=../dev/hadoop 
--project=hadoop HADOOP-11820

(snip)

HADOOP-11820 appears to be a Github PR. Switching Modes.
GITHUB PR #2 is being downloaded at Tue Sep  1 01:00:41 JST 2015 from
https://github.com/aw-altiscale/hadoop/pull/2
Patch from GITHUB PR #2 is being downloaded at Tue Sep  1 01:00:42 JST 2015 from
https://github.com/aw-altiscale/hadoop/pull/2.patch

(snip)

-1 overall

 _ _ __ 
|  ___|_ _(_) |_   _ _ __ ___| |
| |_ / _` | | | | | | '__/ _ \ |
|  _| (_| | | | |_| | | |  __/_|
|_|  \__,_|_|_|\__,_|_|  \___(_)



| Vote |  Subsystem |  Runtime   | Comment

|  +1  |  site  |  0m 00s| HADOOP-12111 passed 
|   0  |   @author  |  0m 00s| Skipping @author checks as test-patch.sh 
|  ||| has been patched.
|  -1  |test4tests  |  0m 00s| The patch doesn't appear to include any 
|  ||| new or modified tests. Please justify why
|  ||| no new tests are needed for this patch.
|  ||| Also please list what manual steps were
|  ||| performed to verify this patch.
|  +1  |  site  |  0m 00s| the patch passed 
|  +1  |asflicense  |  0m 26s| Patch does not generate ASF License 
|  ||| warnings.
|  -1  |shellcheck  |  0m 12s| The applied patch generated 3 new 
|  ||| shellcheck issues (total was 34, now 37).
|  +1  |whitespace  |  0m 00s| Patch has no whitespace issues. 
|  ||  1m 04s| 


|| Subsystem || Report/Notes ||

| JIRA Issue | HADOOP-11820 |
| GITHUB PR | https://github.com/aw-altiscale/hadoop/pull/2 |
| git revision | HADOOP-12111 / b006c9a |
| Optional Tests | asflicense site unit shellcheck |
| uname | Darwin mobile.local 14.5.0 Darwin Kernel Version 14.5.0: Wed Jul 29 
02:26:53 PDT 2015; root:xnu-2782.40.9~1/RELEASE_X86_64 x86_64 |
| Build tool | maven |
| Personality | /Users/sekikn/hadoop/dev-support/personality/hadoop.sh |
| Default Java | 1.7.0_80 |
| shellcheck | v0.3.6 |
| shellcheck | /private/tmp/test-patch-hadoop/29857/diff-patch-shellcheck.txt |
| Max memory used | 46MB |
{code}

> github_jira_bridge doesn't work
> ---
>
> Key: HADOOP-12336
> URL: https://issues.apache.org/jira/browse/HADOOP-12336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12336.HADOOP-12111.00.patch, 
> HADOOP-12336.HADOOP-12111.01.patch
>
>
> The github_jira_bridge (which allows the JIRA bugsystem to switch to github 
> mode) is failing. See comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12336) github_jira_bridge doesn't work

2015-08-30 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14721534#comment-14721534
 ] 

Kengo Seki commented on HADOOP-12336:
-

I tried 00.patch on HADOOP-11820 and confirmed that the PR is applied successfully. 
A few nits, though:

{code}
  urlfromjira=$(${AWK} 'match($0,"https://github.com/.*patch"){print 
substr($0,RSTART,RLENGTH)}' "${PATCH_DIR}/jira" | tail -1)
{code}

* Using $\{GITHUB_BASE_URL} instead of a hard-coded URL seems better. Is there 
any reason not to (see the sketch after this list)?
* "https://github.com/[^ ]*patch" seems safer, just as jira_locate_patch 
does.
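
A sketch combining both nits (assuming $\{GITHUB_BASE_URL} is set to something 
like https://github.com and can be treated as a literal-enough regex):

{code}
urlfromjira=$(${AWK} -v base="${GITHUB_BASE_URL}" \
  'match($0, base "/[^ ]*patch") {print substr($0, RSTART, RLENGTH)}' \
  "${PATCH_DIR}/jira" | tail -1)
{code}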


> github_jira_bridge doesn't work
> ---
>
> Key: HADOOP-12336
> URL: https://issues.apache.org/jira/browse/HADOOP-12336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12336.HADOOP-12111.00.patch
>
>
> The github_jira_bridge (which allows the JIRA bugsystem to switch to github 
> mode) is failing. See comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12354) github_find_jira_title in test-patch github plugin returns 0 even if jira_determine_issue failed

2015-08-25 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711589#comment-14711589
 ] 

Kengo Seki commented on HADOOP-12354:
-

I think it isn't. After applying HADOOP-12336.HADOOP-12111.00.patch, 
github_find_jira_title is still as follows:

{code}
function github_find_jira_title
{
  declare title
  declare maybe
  declare retval

  if [[ ! -f "${PATCH_DIR}/github-pull.json" ]]; then
return 1
  fi

  title=$(GREP title "${PATCH_DIR}/github-pull.json" \
| cut -f4 -d\")

  # people typically do two types:  JIRA-ISSUE: and [JIRA-ISSUE]
  # JIRA_ISSUE_RE is pretty strict so we need to chop that stuff
  # out first

  maybe=$(echo "${title}" | cut -f2 -d\[ | cut -f1 -d\])
  jira_determine_issue "${maybe}"
  retval=$?

  if [[ ${retval} == 0 ]]; then
return 0
  fi

  maybe=$(echo "${title}" | cut -f1 -d:)
  jira_determine_issue "${maybe}"
  retval=$?

  if [[ ${retval} == 0 ]]; then
return 0
  fi
}
{code}

Shouldn't it return a non-zero value if both invocations of jira_determine_issue fail?
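
In other words, the fix could be as small as adding an explicit failure return at 
the end of the function, e.g. (sketch only):

{code}
  maybe=$(echo "${title}" | cut -f1 -d:)
  jira_determine_issue "${maybe}"
  retval=$?

  if [[ ${retval} == 0 ]]; then
    return 0
  fi

  # neither title format matched a JIRA issue
  return 1
}
{code}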

> github_find_jira_title in test-patch github plugin returns 0 even if 
> jira_determine_issue failed
> 
>
> Key: HADOOP-12354
> URL: https://issues.apache.org/jira/browse/HADOOP-12354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>
> So the following check for $\{GITHUB_ISSUE} seems to be skipped and 
> github_determine_issue seems to succeed almost always.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12340) test-patch docker mode fails in downloading findbugs with curl

2015-08-25 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14711564#comment-14711564
 ] 

Kengo Seki commented on HADOOP-12340:
-

Oh, sorry. If it needs to be applied, please ignore.

> test-patch docker mode fails in downloading findbugs with curl
> --
>
> Key: HADOOP-12340
> URL: https://issues.apache.org/jira/browse/HADOOP-12340
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Fix For: HADOOP-12111
>
> Attachments: HADOOP-12340.HADOOP-12111.00.patch, 
> HADOOP-12340.HADOOP-12111.01.patch
>
>
> HADOOP-12129 replaced wget commands in test-patch with curl. But curl doesn't 
> follow URL redirection by default, so docker mode fails in downloading 
> findbugs for now:
> {code}
> $ dev-support/test-patch.sh --docker --project=hadoop /tmp/test.patch
> (snip)
> Step 11 : RUN mkdir -p /opt/findbugs && curl 
> https://sourceforge.net/projects/findbugs/files/findbugs/3.0.1/findbugs-noUpdateChecks-3.0.1.tar.gz/download
>   -o /opt/findbugs.tar.gz && tar xzf /opt/findbugs.tar.gz 
> --strip-components 1 -C /opt/findbugs
>  ---> Running in 31c71013bb59
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
> 100   397  100   3970 0168  0  0:00:02  0:00:02 --:--:--   168
> gzip: stdin: not in gzip format
> tar: Child returned status 1
> tar: Error is not recoverable: exiting now
> The command '/bin/sh -c mkdir -p /opt/findbugs && curl 
> https://sourceforge.net/projects/findbugs/files/findbugs/3.0.1/findbugs-noUpdateChecks-3.0.1.tar.gz/download
>   -o /opt/findbugs.tar.gz && tar xzf /opt/findbugs.tar.gz 
> --strip-components 1 -C /opt/findbugs' returned a non-zero code: 2
> {code}
> Adding -L option will resolve this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12340) test-patch docker mode fails in downloading findbugs with curl

2015-08-25 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12340:

Attachment: HADOOP-12340.HADOOP-12111.01.patch

-01:

* replace -L with --location for consistency with other curl commands

> test-patch docker mode fails in downloading findbugs with curl
> --
>
> Key: HADOOP-12340
> URL: https://issues.apache.org/jira/browse/HADOOP-12340
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Fix For: HADOOP-12111
>
> Attachments: HADOOP-12340.HADOOP-12111.00.patch, 
> HADOOP-12340.HADOOP-12111.01.patch
>
>
> HADOOP-12129 replaced wget commands in test-patch with curl. But curl doesn't 
> follow URL redirection by default, so docker mode fails in downloading 
> findbugs for now:
> {code}
> $ dev-support/test-patch.sh --docker --project=hadoop /tmp/test.patch
> (snip)
> Step 11 : RUN mkdir -p /opt/findbugs && curl 
> https://sourceforge.net/projects/findbugs/files/findbugs/3.0.1/findbugs-noUpdateChecks-3.0.1.tar.gz/download
>   -o /opt/findbugs.tar.gz && tar xzf /opt/findbugs.tar.gz 
> --strip-components 1 -C /opt/findbugs
>  ---> Running in 31c71013bb59
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
> 100   397  100   3970 0168  0  0:00:02  0:00:02 --:--:--   168
> gzip: stdin: not in gzip format
> tar: Child returned status 1
> tar: Error is not recoverable: exiting now
> The command '/bin/sh -c mkdir -p /opt/findbugs && curl 
> https://sourceforge.net/projects/findbugs/files/findbugs/3.0.1/findbugs-noUpdateChecks-3.0.1.tar.gz/download
>   -o /opt/findbugs.tar.gz && tar xzf /opt/findbugs.tar.gz 
> --strip-components 1 -C /opt/findbugs' returned a non-zero code: 2
> {code}
> Adding -L option will resolve this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12355) test-patch TAP plugin should use ${SED} instead of sed

2015-08-25 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12355:

Summary: test-patch TAP plugin should use ${SED} instead of sed  (was: 
test-patch TAP plugin should use $\{SED} instead of sed)

> test-patch TAP plugin should use ${SED} instead of sed
> --
>
> Key: HADOOP-12355
> URL: https://issues.apache.org/jira/browse/HADOOP-12355
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Priority: Trivial
>  Labels: newbie
>
> for consistency and platform compatibility.
> {code}
>  54   if [[ -n "${filenames}" ]]; then
>  55 module_failed_tests=$(echo "${filenames}" \
>  56   | sed -e "s,${TAP_LOG_DIR},,g" -e s,^/,,g )
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12355) test-patch TAP plugin should use $\{SED} instead of sed

2015-08-25 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12355:
---

 Summary: test-patch TAP plugin should use $\{SED} instead of sed
 Key: HADOOP-12355
 URL: https://issues.apache.org/jira/browse/HADOOP-12355
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki
Priority: Trivial


for consistency and platform compatibility.

{code}
 54   if [[ -n "${filenames}" ]]; then
 55 module_failed_tests=$(echo "${filenames}" \
 56   | sed -e "s,${TAP_LOG_DIR},,g" -e s,^/,,g )
{code}
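
A minimal sketch of the suggested change, with only the sed invocation replaced:

{code}
  if [[ -n "${filenames}" ]]; then
    module_failed_tests=$(echo "${filenames}" \
      | ${SED} -e "s,${TAP_LOG_DIR},,g" -e s,^/,,g )
  fi
{code}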



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12354) github_find_jira_title in test-patch github plugin returns 0 even if jira_determine_issue failed

2015-08-25 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12354:
---

 Summary: github_find_jira_title in test-patch github plugin 
returns 0 even if jira_determine_issue failed
 Key: HADOOP-12354
 URL: https://issues.apache.org/jira/browse/HADOOP-12354
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki


So the following check for $\{GITHUB_ISSUE} seems to be skipped and 
github_determine_issue seems to succeed almost always.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12349) Misleading debug message in generic_locate_patch

2015-08-24 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12349:
---

 Summary: Misleading debug message in generic_locate_patch
 Key: HADOOP-12349
 URL: https://issues.apache.org/jira/browse/HADOOP-12349
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki
Priority: Minor


The following debug message in builtin-bugsystem.sh duplicates the one in 
jira_locate_patch and is misleading.

{code}
yetus_debug "jira_locate_patch: not a JIRA."
{code}
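
A minimal sketch of one possible fix, assuming the message should simply name the enclosing function:

{code}
yetus_debug "generic_locate_patch: not a JIRA."
{code}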



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12301) Fix some test-patch plugins to count the diff lines correctly

2015-08-22 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12301:

Assignee: Kengo Seki
  Status: Patch Available  (was: Open)

> Fix some test-patch plugins to count the diff lines correctly
> -
>
> Key: HADOOP-12301
> URL: https://issues.apache.org/jira/browse/HADOOP-12301
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Attachments: HADOOP-12301.HADOOP-12111.00.patch
>
>
> 1. rubocop.sh counts only lines which have at least five fields separated by 
> a colon:
> {code}
>   calcdiffs "${PATCH_DIR}/branch-rubocop-result.txt" 
> "${PATCH_DIR}/patch-rubocop-result.txt" > 
> "${PATCH_DIR}/diff-patch-rubocop.txt"
>   diffPostpatch=$(${AWK} -F: 'BEGIN {sum=0} 4<NF {sum+=1} END {print sum}' "${PATCH_DIR}/diff-patch-rubocop.txt")
>   if [[ ${diffPostpatch} -gt 0 ]] ; then
>     # shellcheck disable=SC2016
>     numPrepatch=$(${AWK} -F: 'BEGIN {sum=0} 4<NF {sum+=1} END {print sum}' "${PATCH_DIR}/branch-rubocop-result.txt")
>     # shellcheck disable=SC2016
>     numPostpatch=$(${AWK} -F: 'BEGIN {sum=0} 4<NF {sum+=1} END {print sum}' "${PATCH_DIR}/patch-rubocop-result.txt")
> {code}
> This is because the diff result can contain multiple lines for one issue. For 
> example:
> {code}
> [sekikn@mobile hadoop]$ cat 
> /private/tmp/test-patch-hbase/25821/diff-patch-rubocop.txt
> bin/draining_servers.rb:165:1: C: Do not introduce global variables.
> $foo
> 
> {code}
> Checking the number of fields is intended to skip the second and third lines 
> in the above diff file. But four or more colons can appear in the source code 
> itself, for example:
> {code}
> | Vote |   Subsystem |  Runtime   | Comment
> 
> |  -1  |rubocop  |  0m 02s| The applied patch generated 4 new 
> |  | || rubocop issues (total was 77, now 81).
> || Subsystem || Report/Notes ||
> 
> | rubocop | v0.32.1 |
> | rubocop | /private/tmp/test-patch-hbase/5632/diff-patch-rubocop.txt |
> [sekikn@mobile hadoop]$ cat 
> /private/tmp/test-patch-hbase/5632/diff-patch-rubocop.txt
> bin/draining_servers.rb:165:4: C: Do not use :: for method calls.
> foo::bar::baz
>^^
> bin/draining_servers.rb:165:9: C: Do not use :: for method calls.
> foo::bar::baz
> ^^
> [sekikn@mobile hadoop]$ 
> {code}
> In this case, new rubocop issues should be 2, but counted as 4 incorrectly. 
> More reliable way to count is needed.
> 2. pylint.sh has the same problem. In addition, I removed awk's '-F:' option 
> by mistake on HADOOP-12286. It can report a wrong number for now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12301) Fix some test-patch plugins to count the diff lines correctly

2015-08-22 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12301:

Attachment: HADOOP-12301.HADOOP-12111.00.patch

-00:

* Change the rubocop formatter from "clang" to "emacs". The latter outputs only one 
line per issue, so we can simply count lines as the number of issues. To be safe, 
I left the check for the number of fields in place, in case the message part of a 
line contains a newline.

* Change pylint.sh to count only lines matching {code}/^.*:.*: \[.*\] 
/{code} (e.g., "a.py:1: \[E0001(syntax-error), ] invalid syntax") and skip 
unnecessary lines. This pattern is compatible with a very early (2006) version 
of the --parseable option. 
https://bitbucket.org/logilab/pylint/commits/79b31c260ad15738a52b504b0ae29b52abf6367b?at=before-astroid#chg-reporters/text.py
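
For illustration, a sketch of how such a count could look, reusing the ${AWK} and ${PATCH_DIR} conventions from the snippets quoted below (hypothetical; the attached patch is authoritative):

{code}
# hypothetical: count only lines that look like pylint issue headers
diffPostpatch=$(${AWK} '/^.*:.*: \[.*\] / {sum+=1} END {print sum+0}' \
  "${PATCH_DIR}/diff-patch-pylint.txt")
{code}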

I confirmed they report correct numbers of issues using the above example 
patches.

Rubocop:

{code}
| Vote |   Subsystem |  Runtime   | Comment

|  -1  |rubocop  |  0m 02s| The applied patch generated 2 new 
|  | || rubocop issues (total was 77, now 79).

|| Subsystem || Report/Notes ||

| rubocop | v0.32.1 |
| rubocop | /private/tmp/test-patch-hbase/83220/diff-patch-rubocop.txt |

[sekikn@mobile hadoop]$ cat 
/private/tmp/test-patch-hbase/83220/diff-patch-rubocop.txt
/Users/sekikn/hbase/bin/draining_servers.rb:165:4: C: Do not use `::` for 
method calls.
/Users/sekikn/hbase/bin/draining_servers.rb:165:9: C: Do not use `::` for 
method calls.
[sekikn@mobile hadoop]$ 
{code}

Pylint:

{code}
| Vote |  Subsystem |  Runtime   | Comment

|  -1  |pylint  |  0m 05s| The applied patch generated 6 new pylint 
|  ||| issues (total was 437, now 443).


|| Subsystem || Report/Notes ||

| pylint | v1.4.4 |
| pylint | /private/tmp/test-patch-hadoop/61453/diff-patch-pylint.txt |


[sekikn@mobile hadoop]$ cat 
/private/tmp/test-patch-hadoop/61453/diff-patch-pylint.txt
dev-support/releasedocmaker.py:584: [W0311(bad-indentation), ] Bad indentation. 
Found 4 spaces, expected 2
dev-support/releasedocmaker.py:584: [C0326(bad-whitespace), ] No space allowed 
after bracket
( ( ) )
^
dev-support/releasedocmaker.py:584: [C0326(bad-whitespace), ] No space allowed 
after bracket
( ( ) )
  ^
dev-support/releasedocmaker.py:584: [C0326(bad-whitespace), ] No space allowed 
before bracket
( ( ) )
^
dev-support/releasedocmaker.py:584: [C0326(bad-whitespace), ] No space allowed 
before bracket
( ( ) )
  ^
dev-support/releasedocmaker.py:584: [W0104(pointless-statement), ] Statement 
seems to have no effect
[sekikn@mobile hadoop]$ 
{code}

> Fix some test-patch plugins to count the diff lines correctly
> -
>
> Key: HADOOP-12301
> URL: https://issues.apache.org/jira/browse/HADOOP-12301
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
> Attachments: HADOOP-12301.HADOOP-12111.00.patch
>
>
> 1. rubocop.sh counts only lines which have at least five fields separated by 
> a colon:
> {code}
>   calcdiffs "${PATCH_DIR}/branch-rubocop-result.txt" 
> "${PATCH_DIR}/patch-rubocop-result.txt" > 
> "${PATCH_DIR}/diff-patch-rubocop.txt"
>   diffPostpatch=$(${AWK} -F: 'BEGIN {sum=0} 4<NF {sum+=1} END {print sum}' "${PATCH_DIR}/diff-patch-rubocop.txt")
>   if [[ ${diffPostpatch} -gt 0 ]] ; then
>     # shellcheck disable=SC2016
>     numPrepatch=$(${AWK} -F: 'BEGIN {sum=0} 4<NF {sum+=1} END {print sum}' "${PATCH_DIR}/branch-rubocop-result.txt")
>     # shellcheck disable=SC2016
>     numPostpatch=$(${AWK} -F: 'BEGIN {sum=0} 4<NF {sum+=1} END {print sum}' "${PATCH_DIR}/patch-rubocop-result.txt")
> {code}
> This is because the diff result can contain multiple lines for one issue. For 
> example:
> {code}
> [sekikn@mobile hadoop]$ cat 
> /private/tmp/test-patch-hbase/25821/diff-patch-rubocop.txt
> bin/draining_servers.rb:165:1: C: Do not introduce global variables.
> $foo
> 
> {code}
> Checking the number of fields is intended to skip the second and third lines 
> in the above diff file. But four or more colons can appear in the source code 
> itself, for example:
> {code}
> | Vote |   Subsystem |  Runtime   | Comment
> 
> |  -1  |rubocop  |  0m 02s| The applied patch generated 4 new 
> |  | || rubocop issues (total was 77, now 81).
> || Subsystem || Report/Notes ||
> =

[jira] [Commented] (HADOOP-12301) Fix some test-patch plugins to count the diff lines correctly

2015-08-22 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14708080#comment-14708080
 ] 

Kengo Seki commented on HADOOP-12301:
-

A pylint example. In this case, the number of new pylint issues should be 6, but 
it is incorrectly counted as 10.

{code}
| Vote |  Subsystem |  Runtime   | Comment

|  -1  |pylint  |  0m 05s| The applied patch generated 10 new 
|  ||| pylint issues (total was 437, now 447).


|| Subsystem || Report/Notes ||

| pylint | v1.4.4 |
| pylint | /private/tmp/test-patch-hadoop/40919/diff-patch-pylint.txt |


[sekikn@mobile hadoop]$ cat 
/private/tmp/test-patch-hadoop/40919/diff-patch-pylint.txt
dev-support/releasedocmaker.py:584: [W0311(bad-indentation), ] Bad indentation. 
Found 4 spaces, expected 2
dev-support/releasedocmaker.py:584: [C0326(bad-whitespace), ] No space allowed 
after bracket
( ( ) )
^
dev-support/releasedocmaker.py:584: [C0326(bad-whitespace), ] No space allowed 
after bracket
( ( ) )
  ^
dev-support/releasedocmaker.py:584: [C0326(bad-whitespace), ] No space allowed 
before bracket
( ( ) )
^
dev-support/releasedocmaker.py:584: [C0326(bad-whitespace), ] No space allowed 
before bracket
( ( ) )
  ^
dev-support/releasedocmaker.py:584: [W0104(pointless-statement), ] Statement 
seems to have no effect
[sekikn@mobile hadoop]$ 
{code}


> Fix some test-patch plugins to count the diff lines correctly
> -
>
> Key: HADOOP-12301
> URL: https://issues.apache.org/jira/browse/HADOOP-12301
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>
> 1. rubocop.sh counts only lines which have at least five fields separated by 
> a colon:
> {code}
>   calcdiffs "${PATCH_DIR}/branch-rubocop-result.txt" 
> "${PATCH_DIR}/patch-rubocop-result.txt" > 
> "${PATCH_DIR}/diff-patch-rubocop.txt"
>   diffPostpatch=$(${AWK} -F: 'BEGIN {sum=0} 4<NF {sum+=1} END {print sum}' "${PATCH_DIR}/diff-patch-rubocop.txt")
>   if [[ ${diffPostpatch} -gt 0 ]] ; then
>     # shellcheck disable=SC2016
>     numPrepatch=$(${AWK} -F: 'BEGIN {sum=0} 4<NF {sum+=1} END {print sum}' "${PATCH_DIR}/branch-rubocop-result.txt")
>     # shellcheck disable=SC2016
>     numPostpatch=$(${AWK} -F: 'BEGIN {sum=0} 4<NF {sum+=1} END {print sum}' "${PATCH_DIR}/patch-rubocop-result.txt")
> {code}
> This is because the diff result can contain multiple lines for one issue. For 
> example:
> {code}
> [sekikn@mobile hadoop]$ cat 
> /private/tmp/test-patch-hbase/25821/diff-patch-rubocop.txt
> bin/draining_servers.rb:165:1: C: Do not introduce global variables.
> $foo
> 
> {code}
> Checking the number of fields is intended to skip the second and third lines 
> in the above diff file. But four or more colons can appear in the source code 
> itself, for example:
> {code}
> | Vote |   Subsystem |  Runtime   | Comment
> 
> |  -1  |rubocop  |  0m 02s| The applied patch generated 4 new 
> |  | || rubocop issues (total was 77, now 81).
> || Subsystem || Report/Notes ||
> 
> | rubocop | v0.32.1 |
> | rubocop | /private/tmp/test-patch-hbase/5632/diff-patch-rubocop.txt |
> [sekikn@mobile hadoop]$ cat 
> /private/tmp/test-patch-hbase/5632/diff-patch-rubocop.txt
> bin/draining_servers.rb:165:4: C: Do not use :: for method calls.
> foo::bar::baz
>^^
> bin/draining_servers.rb:165:9: C: Do not use :: for method calls.
> foo::bar::baz
> ^^
> [sekikn@mobile hadoop]$ 
> {code}
> In this case, new rubocop issues should be 2, but counted as 4 incorrectly. 
> More reliable way to count is needed.
> 2. pylint.sh has the same problem. In addition, I removed awk's '-F:' option 
> by mistake on HADOOP-12286. It can report a wrong number for now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12301) Fix some test-patch plugins to count the diff lines correctly

2015-08-22 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14708070#comment-14708070
 ] 

Kengo Seki commented on HADOOP-12301:
-

I misunderstood the ruby-lint output. It outputs only one line per issue (I ran 
ruby-lint against all of the .rb files under HBase and confirmed it), so the 
current implementation is reasonable. I edited the JIRA description to reflect 
reality.

> Fix some test-patch plugins to count the diff lines correctly
> -
>
> Key: HADOOP-12301
> URL: https://issues.apache.org/jira/browse/HADOOP-12301
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>
> 1. rubocop.sh counts only lines which have at least five fields separated by 
> a colon:
> {code}
>   calcdiffs "${PATCH_DIR}/branch-rubocop-result.txt" 
> "${PATCH_DIR}/patch-rubocop-result.txt" > 
> "${PATCH_DIR}/diff-patch-rubocop.txt"
>   diffPostpatch=$(${AWK} -F: 'BEGIN {sum=0} 4<NF {sum+=1} END {print sum}' "${PATCH_DIR}/diff-patch-rubocop.txt")
>   if [[ ${diffPostpatch} -gt 0 ]] ; then
>     # shellcheck disable=SC2016
>     numPrepatch=$(${AWK} -F: 'BEGIN {sum=0} 4<NF {sum+=1} END {print sum}' "${PATCH_DIR}/branch-rubocop-result.txt")
>     # shellcheck disable=SC2016
>     numPostpatch=$(${AWK} -F: 'BEGIN {sum=0} 4<NF {sum+=1} END {print sum}' "${PATCH_DIR}/patch-rubocop-result.txt")
> {code}
> This is because the diff result can contain multiple lines for one issue. For 
> example:
> {code}
> [sekikn@mobile hadoop]$ cat 
> /private/tmp/test-patch-hbase/25821/diff-patch-rubocop.txt
> bin/draining_servers.rb:165:1: C: Do not introduce global variables.
> $foo
> 
> {code}
> Checking the number of fields is intended to skip the second and third lines 
> in the above diff file. But four or more colons can appear in the source code 
> itself, for example:
> {code}
> | Vote |   Subsystem |  Runtime   | Comment
> 
> |  -1  |rubocop  |  0m 02s| The applied patch generated 4 new 
> |  | || rubocop issues (total was 77, now 81).
> || Subsystem || Report/Notes ||
> 
> | rubocop | v0.32.1 |
> | rubocop | /private/tmp/test-patch-hbase/5632/diff-patch-rubocop.txt |
> [sekikn@mobile hadoop]$ cat 
> /private/tmp/test-patch-hbase/5632/diff-patch-rubocop.txt
> bin/draining_servers.rb:165:4: C: Do not use :: for method calls.
> foo::bar::baz
>^^
> bin/draining_servers.rb:165:9: C: Do not use :: for method calls.
> foo::bar::baz
> ^^
> [sekikn@mobile hadoop]$ 
> {code}
> In this case, new rubocop issues should be 2, but counted as 4 incorrectly. 
> More reliable way to count is needed.
> 2. pylint.sh has the same problem. In addition, I removed awk's '-F:' option 
> by mistake on HADOOP-12286. It can report a wrong number for now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12301) Fix some test-patch plugins to count the diff lines correctly

2015-08-22 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12301:

Description: 
1. rubocop.sh counts only lines which have at least five fields separated by a 
colon:

{code}
  calcdiffs "${PATCH_DIR}/branch-rubocop-result.txt" 
"${PATCH_DIR}/patch-rubocop-result.txt" > "${PATCH_DIR}/diff-patch-rubocop.txt"
  diffPostpatch=$(${AWK} -F: 'BEGIN {sum=0} 4https://github.com/bbatsov/rubocop#emacs-style-formatter


> Fix some test-patch plugins to count the diff lines correctly
> -
>
> Key: HADOOP-12301
> URL: https://issues.apache.org/jira/browse/HADOOP-12301
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>
> 1. rubocop.sh counts only lines which have at least five fields separated by 
> a colon:
> {code}
>   calcdiffs "${PATCH_DIR}/branch-rubocop-result.txt" 
> "${PATCH_DIR}/patch-rubocop-result.txt" > 
> "${PATCH_DIR}/diff-patch-rubocop.txt"
>   diffPostpatch=$(${AWK} -F: 'BEGIN {sum=0} 4<NF {sum+=1} END {print sum}' "${PATCH_DIR}/diff-patch-rubocop.txt")
>   if [[ ${diffPostpatch} -gt 0 ]] ; then
>     # shellcheck disable=SC2016
>     numPrepatch=$(${AWK} -F: 'BEGIN {sum=0} 4<NF {sum+=1} END {print sum}' "${PATCH_DIR}/branch-rubocop-result.txt")
>     # shellcheck disable=SC2016
>     numPostpatch=$(${AWK} -F: 'BEGIN {sum=0} 4<NF {sum+=1} END {print sum}' "${PATCH_DIR}/patch-rubocop-result.txt")
> {code}
> This is because the diff result can contain multiple lines for one issue. For 
> example:
> {code}
> [sekikn@mobile hadoop]$ cat 
> /private/tmp/test-patch-hbase/25821/diff-patch-rubocop.txt
> bin/draining_servers.rb:165:1: C: Do not introduce global variables.
> $foo
> 
> {code}
> Checking the number of fields is intended to skip the second and third lines 
> in the above diff file. But four or more colons can appear in the source code 
> itself, for example:
> {code}
> | Vote |   Subsystem |  Runtime   | Comment
> 
> |  -1  |rubocop  |  0m 02s| The applied patch generated 4 new 
> |  | || rubocop issues (total was 77, now 81).
> || Subsystem || Report/Notes ||
> 
> | rubocop | v0.32.1 |
> | rubocop | /private/tmp/test-patch-hbase/5632/diff-patch-rubocop.txt |
> [sekikn@mobile hadoop]$ cat 
> /private/tmp/test-patch-hbase/5632/diff-patch-rubocop.txt
> bin/draining_servers.rb:165:4: C: Do not use :: for method calls.
> foo::bar::baz
>^^
> bin/draining_servers.rb:165:9: C: Do not use :: for method calls.
> foo::bar::baz
> ^^
> [sekikn@mobile hadoop]$ 
> {code}
> In this case, new rubocop issues should be 2, but counted as 4 incorrectly. 
> More reliable way to count is needed.
> 2. pylint.sh has the same problem. In addition, I removed awk's '-F:' option 
> by mistake on HADOOP-12286. It can report a wrong number for now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12340) test-patch docker mode fails in downloading findbugs with curl

2015-08-21 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12340:

Assignee: Kengo Seki
  Status: Patch Available  (was: Open)

> test-patch docker mode fails in downloading findbugs with curl
> --
>
> Key: HADOOP-12340
> URL: https://issues.apache.org/jira/browse/HADOOP-12340
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Attachments: HADOOP-12340.HADOOP-12111.00.patch
>
>
> HADOOP-12129 replaced wget commands in test-patch with curl. But curl doesn't 
> follow URL redirection by default, so docker mode fails in downloading 
> findbugs for now:
> {code}
> $ dev-support/test-patch.sh --docker --project=hadoop /tmp/test.patch
> (snip)
> Step 11 : RUN mkdir -p /opt/findbugs && curl 
> https://sourceforge.net/projects/findbugs/files/findbugs/3.0.1/findbugs-noUpdateChecks-3.0.1.tar.gz/download
>   -o /opt/findbugs.tar.gz && tar xzf /opt/findbugs.tar.gz 
> --strip-components 1 -C /opt/findbugs
>  ---> Running in 31c71013bb59
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
> 100   397  100   3970 0168  0  0:00:02  0:00:02 --:--:--   168
> gzip: stdin: not in gzip format
> tar: Child returned status 1
> tar: Error is not recoverable: exiting now
> The command '/bin/sh -c mkdir -p /opt/findbugs && curl 
> https://sourceforge.net/projects/findbugs/files/findbugs/3.0.1/findbugs-noUpdateChecks-3.0.1.tar.gz/download
>   -o /opt/findbugs.tar.gz && tar xzf /opt/findbugs.tar.gz 
> --strip-components 1 -C /opt/findbugs' returned a non-zero code: 2
> {code}
> Adding -L option will resolve this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12340) test-patch docker mode fails in downloading findbugs with curl

2015-08-21 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12340:

Attachment: HADOOP-12340.HADOOP-12111.00.patch

Attaching a patch. With this patch, docker mode succeeds as follows:

{code}
[sekikn@mobile hadoop]$ dev-support/test-patch.sh --basedir=../dev/hadoop 
--docker --patch-dir=foo --project=hadoop /tmp/test.patch

(snip)

Step 11 : RUN mkdir -p /opt/findbugs && curl -L 
https://sourceforge.net/projects/findbugs/files/findbugs/3.0.1/findbugs-noUpdateChecks-3.0.1.tar.gz/download
  -o /opt/findbugs.tar.gz && tar xzf /opt/findbugs.tar.gz 
--strip-components 1 -C /opt/findbugs
 ---> Using cache
 ---> 3b1253eaf7b3

(snip)

Successfully built 7417b6ced5f2

(snip)

| Vote |  Subsystem |  Runtime   | Comment

|   0  |reexec  |  0m 6s | docker + precommit patch detected. 
|  +1  |   @author  |  0m 0s | The patch does not contain any @author 
|  ||| tags.
|  +1  |asflicense  |  7m 8s | Patch does not generate ASF License 
|  ||| warnings.
|  +1  |whitespace  |  0m 0s | Patch has no whitespace issues. 
|  ||  8m 47s| 


|| Subsystem || Report/Notes ||

| Docker | Client=1.7.1 Server=1.7.1 Image:test-patch-base-hadoop-86a1025 |
| git revision | HADOOP-12111 / 2c4208f |
| Optional Tests | asflicense |
| uname | Linux 9721e2853d64 4.0.7-boot2docker #1 SMP Wed Jul 15 00:01:41 UTC 
2015 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/hadoop.sh |
| Default Java | 1.7.0_79 |
| Max memory used | 30MB |
{code}

> test-patch docker mode fails in downloading findbugs with curl
> --
>
> Key: HADOOP-12340
> URL: https://issues.apache.org/jira/browse/HADOOP-12340
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
> Attachments: HADOOP-12340.HADOOP-12111.00.patch
>
>
> HADOOP-12129 replaced wget commands in test-patch with curl. But curl doesn't 
> follow URL redirection by default, so docker mode fails in downloading 
> findbugs for now:
> {code}
> $ dev-support/test-patch.sh --docker --project=hadoop /tmp/test.patch
> (snip)
> Step 11 : RUN mkdir -p /opt/findbugs && curl 
> https://sourceforge.net/projects/findbugs/files/findbugs/3.0.1/findbugs-noUpdateChecks-3.0.1.tar.gz/download
>   -o /opt/findbugs.tar.gz && tar xzf /opt/findbugs.tar.gz 
> --strip-components 1 -C /opt/findbugs
>  ---> Running in 31c71013bb59
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
> 100   397  100   3970 0168  0  0:00:02  0:00:02 --:--:--   168
> gzip: stdin: not in gzip format
> tar: Child returned status 1
> tar: Error is not recoverable: exiting now
> The command '/bin/sh -c mkdir -p /opt/findbugs && curl 
> https://sourceforge.net/projects/findbugs/files/findbugs/3.0.1/findbugs-noUpdateChecks-3.0.1.tar.gz/download
>   -o /opt/findbugs.tar.gz && tar xzf /opt/findbugs.tar.gz 
> --strip-components 1 -C /opt/findbugs' returned a non-zero code: 2
> {code}
> Adding -L option will resolve this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12233) if CHANGED_FILES is corrupt, find_changed_modules never returns

2015-08-21 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12233:

Attachment: HADOOP-12233.HADOOP-12111.01.patch

bq. Would something simple like verifying that we're still inside $\{BASEDIR} 
be sufficient?

That sounds better: it's more natural and lets us remove the magic numbers 
from the source.
Attaching a revised patch. If a changed file is not in $\{BASEDIR}, test-patch 
will fail as follows:

{code}
[sekikn@mobile hadoop]$ cat /tmp/test.patch 
diff --git /a/dev-support/releasedocmaker.py b/dev-support/releasedocmaker.py
index 3c398be..05aa347 100755
--- a/dev-support/releasedocmaker.py
+++ b/dev-support/releasedocmaker.py
@@ -581,3 +581,4 @@ def main():
 
 if __name__ == "__main__":
 main()
+
[sekikn@mobile hadoop]$ dev-support/test-patch.sh --basedir=../dev/hadoop 
--project=hadoop /tmp/test.patch --debug

(snip)

[Sat Aug 22 13:55:42 JST 2015 DEBUG]: Find pom.xml dir for: /a/dev-support
dev-support/test-patch.sh: line 1123: cd: /a: No such file or directory
[Sat Aug 22 13:55:42 JST 2015 DEBUG]: ERROR: /a/dev-support is not in 
/Users/sekikn/dev/hadoop.
ERROR: pom.xml is not found. Make sure the target is a maven-based project.


-1 overall
{code}

Valid patches still pass:

{code}
[sekikn@mobile hadoop]$ cat /tmp/test.patch 
diff --git a/dev-support/releasedocmaker.py b/dev-support/releasedocmaker.py
index 3c398be..05aa347 100755
--- a/dev-support/releasedocmaker.py
+++ b/dev-support/releasedocmaker.py
@@ -581,3 +581,4 @@ def main():
 
 if __name__ == "__main__":
 main()
+
[sekikn@mobile hadoop]$ dev-support/test-patch.sh --basedir=../dev/hadoop 
--project=hadoop /tmp/test.patch

(snip)

+1 overall
{code}
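
For illustration, a hypothetical sketch of the containment check described above (variable names are made up; the attached patch is authoritative):

{code}
# hypothetical sketch: resolve the changed file's directory and require it to be under BASEDIR
fullpath=$(cd -P -- "${dir}" 2>/dev/null && pwd -P)
if [[ ${fullpath} != "${BASEDIR}"* ]]; then
  yetus_debug "ERROR: ${dir} is not in ${BASEDIR}."
  return 1
fi
{code}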

> if CHANGED_FILES is corrupt, find_changed_modules never returns
> ---
>
> Key: HADOOP-12233
> URL: https://issues.apache.org/jira/browse/HADOOP-12233
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
> Attachments: HADOOP-12233.HADOOP-12111.00.patch, 
> HADOOP-12233.HADOOP-12111.01.patch
>
>
> In building some unit tests, did a negative tests and hit this condition.  We 
> should put a limit on how many times we loop in the find_x_dirs code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12061) Incorrect command in 2.7.0 single cluster setup document

2015-08-19 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12061:

Attachment: HADOOP-12061.branch-2.002.patch

Sure. Would you try this one?

> Incorrect command in 2.7.0 single cluster setup document
> 
>
> Key: HADOOP-12061
> URL: https://issues.apache.org/jira/browse/HADOOP-12061
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-12061.001.patch, HADOOP-12061.002.patch, 
> HADOOP-12061.branch-2.002.patch
>
>
> There seems to be a trivial fault in the single node cluster setup document 
> on branch-2.
> "Setup passphraseless ssh" says as follows:
> {code}
>   $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
>   $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
>   $ export HADOOP\_PREFIX=/usr/local/hadoop
> {code}
> But it should be:
> {code}
>   $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
>   $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
>   $ chmod 0700 ~/.ssh/authorized_keys
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12340) test-patch docker mode fails in downloading findbugs with curl

2015-08-18 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12340:
---

 Summary: test-patch docker mode fails in downloading findbugs with 
curl
 Key: HADOOP-12340
 URL: https://issues.apache.org/jira/browse/HADOOP-12340
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki


HADOOP-12129 replaced wget commands in test-patch with curl. But curl doesn't 
follow URL redirection by default, so docker mode fails in downloading findbugs 
for now:

{code}
$ dev-support/test-patch.sh --docker --project=hadoop /tmp/test.patch

(snip)

Step 11 : RUN mkdir -p /opt/findbugs && curl 
https://sourceforge.net/projects/findbugs/files/findbugs/3.0.1/findbugs-noUpdateChecks-3.0.1.tar.gz/download
  -o /opt/findbugs.tar.gz && tar xzf /opt/findbugs.tar.gz 
--strip-components 1 -C /opt/findbugs
 ---> Running in 31c71013bb59
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100   397  100   3970 0168  0  0:00:02  0:00:02 --:--:--   168

gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
The command '/bin/sh -c mkdir -p /opt/findbugs && curl 
https://sourceforge.net/projects/findbugs/files/findbugs/3.0.1/findbugs-noUpdateChecks-3.0.1.tar.gz/download
  -o /opt/findbugs.tar.gz && tar xzf /opt/findbugs.tar.gz 
--strip-components 1 -C /opt/findbugs' returned a non-zero code: 2
{code}

Adding -L option will resolve this.
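
For reference, a sketch of the corrected download step from the Dockerfile snippet above, with the redirect-following flag added:

{code}
mkdir -p /opt/findbugs && \
  curl -L https://sourceforge.net/projects/findbugs/files/findbugs/3.0.1/findbugs-noUpdateChecks-3.0.1.tar.gz/download \
    -o /opt/findbugs.tar.gz && \
  tar xzf /opt/findbugs.tar.gz --strip-components 1 -C /opt/findbugs
{code}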



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12339) Wrong help message about --curl-cmd option

2015-08-18 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12339:
---

 Summary: Wrong help message about --curl-cmd option
 Key: HADOOP-12339
 URL: https://issues.apache.org/jira/browse/HADOOP-12339
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki
Priority: Minor


{code}
--curl-cmd=   The 'wget' command to use (default 'curl')
{code}

:)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12303) test-patch pylint plugin fails silently and votes +1 incorrectly

2015-08-16 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12303:

Attachment: HADOOP-12303.HADOOP-12111.01.patch

-01:

* address shellcheck issues

> test-patch pylint plugin fails silently and votes +1 incorrectly
> 
>
> Key: HADOOP-12303
> URL: https://issues.apache.org/jira/browse/HADOOP-12303
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Attachments: HADOOP-12303.HADOOP-12111.00.patch, 
> HADOOP-12303.HADOOP-12111.01.patch
>
>
> This patch
> {code}
> [sekikn@mobile hadoop]$ cat /tmp/test.patch 
> diff --git a/dev-support/releasedocmaker.py b/dev-support/releasedocmaker.py
> index 37bd58a..7cd6dd3 100755
> --- a/dev-support/releasedocmaker.py
> +++ b/dev-support/releasedocmaker.py
> @@ -580,4 +580,4 @@ def main():
>  sys.exit(1)
>  
>  if __name__ == "__main__":
> -main()
> +main( )
> {code}
> is supposed to cause the following pylint errors.
> {code}
> C:583, 0: No space allowed after bracket
> main( )
> ^ (bad-whitespace)
> C:583, 0: No space allowed before bracket
> main( )
>   ^ (bad-whitespace)
> {code}
> But the system locale is set as follows, pylint check is passed, and there is 
> no pylint output.
> {code}
> [sekikn@mobile hadoop]$ locale
> LANG=
> LC_COLLATE="C"
> LC_CTYPE="UTF-8"
> LC_MESSAGES="C"
> LC_MONETARY="C"
> LC_NUMERIC="C"
> LC_TIME="C"
> LC_ALL=
> [sekikn@mobile hadoop]$ dev-support/test-patch.sh 
> --basedir=/Users/sekikn/dev/hadoop --project=hadoop /tmp/test.patch 
> (snip)
> | Vote |  Subsystem |  Runtime   | Comment
> 
> |  +1  |   @author  |  0m 00s| The patch does not contain any @author 
> |  ||| tags.
> |  +1  |asflicense  |  0m 21s| Patch does not generate ASF License 
> |  ||| warnings.
> |  +1  |pylint  |  0m 01s| There were no new pylint issues. 
> |  +1  |whitespace  |  0m 00s| Patch has no whitespace issues. 
> |  ||  0m 24s| 
> (snip)
> [sekikn@mobile hadoop]$ cat 
> /private/tmp/test-patch-hadoop/8656/branch-pylint-result.txt 
> [sekikn@mobile hadoop]$ cat 
> /private/tmp/test-patch-hadoop/8656/patch-pylint-result.txt 
> [sekikn@mobile hadoop]$ cat 
> /private/tmp/test-patch-hadoop/8656/diff-patch-pylint.txt 
> {code}
> Removing '2>/dev/null' from pylint.sh reveals the root cause. Setting LC_ALL 
> or LC_CTYPE such like 'en_US.UTF-8' solves this problem.
> {code}
> 
> 
>   pylint plugin: prepatch
> 
> 
> 
> Running pylint against modified python scripts.
> No config file found, using default configuration
> Traceback (most recent call last):
>   File "/usr/local/bin/pylint", line 11, in 
> sys.exit(run_pylint())
>   File "/Library/Python/2.7/site-packages/pylint/__init__.py", line 23, in 
> run_pylint
> Run(sys.argv[1:])
>   File "/Library/Python/2.7/site-packages/pylint/lint.py", line 1332, in 
> __init__
> linter.check(args)
>   File "/Library/Python/2.7/site-packages/pylint/lint.py", line 747, in check
> self._do_check(files_or_modules)
>   File "/Library/Python/2.7/site-packages/pylint/lint.py", line 869, in 
> _do_check
> self.check_astroid_module(ast_node, walker, rawcheckers, tokencheckers)
>   File "/Library/Python/2.7/site-packages/pylint/lint.py", line 944, in 
> check_astroid_module
> checker.process_tokens(tokens)
>   File "/Library/Python/2.7/site-packages/pylint/checkers/format.py", line 
> 743, in process_tokens
> self.check_indent_level(token, indents[-1]+1, line_num)
>   File "/Library/Python/2.7/site-packages/pylint/checkers/format.py", line 
> 963, in check_indent_level
> expected * unit_size))
>   File "/Library/Python/2.7/site-packages/pylint/checkers/__init__.py", line 
> 101, in add_message
> self.linter.add_message(msg_id, line, node, args, confidence)
>   File "/Library/Python/2.7/site-packages/pylint/utils.py", line 410, in 
> add_message
> (abspath, path, module, obj, line or 1, col_offset or 0), msg, 
> confidence))
>   File "/Library/Python/2.7/site-packages/pylint/reporters/text.py", line 61, 
> in handle_message
> self.write_message(msg)
>   File "/Library/Python/2.7/

[jira] [Updated] (HADOOP-12303) test-patch pylint plugin fails silently and votes +1 incorrectly

2015-08-16 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12303:

Assignee: Kengo Seki
  Status: Patch Available  (was: Open)

> test-patch pylint plugin fails silently and votes +1 incorrectly
> 
>
> Key: HADOOP-12303
> URL: https://issues.apache.org/jira/browse/HADOOP-12303
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Attachments: HADOOP-12303.HADOOP-12111.00.patch
>
>
> This patch
> {code}
> [sekikn@mobile hadoop]$ cat /tmp/test.patch 
> diff --git a/dev-support/releasedocmaker.py b/dev-support/releasedocmaker.py
> index 37bd58a..7cd6dd3 100755
> --- a/dev-support/releasedocmaker.py
> +++ b/dev-support/releasedocmaker.py
> @@ -580,4 +580,4 @@ def main():
>  sys.exit(1)
>  
>  if __name__ == "__main__":
> -main()
> +main( )
> {code}
> is supposed to cause the following pylint errors.
> {code}
> C:583, 0: No space allowed after bracket
> main( )
> ^ (bad-whitespace)
> C:583, 0: No space allowed before bracket
> main( )
>   ^ (bad-whitespace)
> {code}
> But the system locale is set as follows, pylint check is passed, and there is 
> no pylint output.
> {code}
> [sekikn@mobile hadoop]$ locale
> LANG=
> LC_COLLATE="C"
> LC_CTYPE="UTF-8"
> LC_MESSAGES="C"
> LC_MONETARY="C"
> LC_NUMERIC="C"
> LC_TIME="C"
> LC_ALL=
> [sekikn@mobile hadoop]$ dev-support/test-patch.sh 
> --basedir=/Users/sekikn/dev/hadoop --project=hadoop /tmp/test.patch 
> (snip)
> | Vote |  Subsystem |  Runtime   | Comment
> 
> |  +1  |   @author  |  0m 00s| The patch does not contain any @author 
> |  ||| tags.
> |  +1  |asflicense  |  0m 21s| Patch does not generate ASF License 
> |  ||| warnings.
> |  +1  |pylint  |  0m 01s| There were no new pylint issues. 
> |  +1  |whitespace  |  0m 00s| Patch has no whitespace issues. 
> |  ||  0m 24s| 
> (snip)
> [sekikn@mobile hadoop]$ cat 
> /private/tmp/test-patch-hadoop/8656/branch-pylint-result.txt 
> [sekikn@mobile hadoop]$ cat 
> /private/tmp/test-patch-hadoop/8656/patch-pylint-result.txt 
> [sekikn@mobile hadoop]$ cat 
> /private/tmp/test-patch-hadoop/8656/diff-patch-pylint.txt 
> {code}
> Removing '2>/dev/null' from pylint.sh reveals the root cause. Setting LC_ALL 
> or LC_CTYPE such like 'en_US.UTF-8' solves this problem.
> {code}
> 
> 
>   pylint plugin: prepatch
> 
> 
> 
> Running pylint against modified python scripts.
> No config file found, using default configuration
> Traceback (most recent call last):
>   File "/usr/local/bin/pylint", line 11, in 
> sys.exit(run_pylint())
>   File "/Library/Python/2.7/site-packages/pylint/__init__.py", line 23, in 
> run_pylint
> Run(sys.argv[1:])
>   File "/Library/Python/2.7/site-packages/pylint/lint.py", line 1332, in 
> __init__
> linter.check(args)
>   File "/Library/Python/2.7/site-packages/pylint/lint.py", line 747, in check
> self._do_check(files_or_modules)
>   File "/Library/Python/2.7/site-packages/pylint/lint.py", line 869, in 
> _do_check
> self.check_astroid_module(ast_node, walker, rawcheckers, tokencheckers)
>   File "/Library/Python/2.7/site-packages/pylint/lint.py", line 944, in 
> check_astroid_module
> checker.process_tokens(tokens)
>   File "/Library/Python/2.7/site-packages/pylint/checkers/format.py", line 
> 743, in process_tokens
> self.check_indent_level(token, indents[-1]+1, line_num)
>   File "/Library/Python/2.7/site-packages/pylint/checkers/format.py", line 
> 963, in check_indent_level
> expected * unit_size))
>   File "/Library/Python/2.7/site-packages/pylint/checkers/__init__.py", line 
> 101, in add_message
> self.linter.add_message(msg_id, line, node, args, confidence)
>   File "/Library/Python/2.7/site-packages/pylint/utils.py", line 410, in 
> add_message
> (abspath, path, module, obj, line or 1, col_offset or 0), msg, 
> confidence))
>   File "/Library/Python/2.7/site-packages/pylint/reporters/text.py", line 61, 
> in handle_message
> self.write_message(msg)
>   File "/Library/Python/2.7/site-packages/pylint/reporters/text.py", line 51, 
> in w

[jira] [Updated] (HADOOP-12303) test-patch pylint plugin fails silently and votes +1 incorrectly

2015-08-16 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12303:

Attachment: HADOOP-12303.HADOOP-12111.00.patch

Attaching a patch. I confirmed the pylint plugin detects the above case 
correctly and passes some simple regression tests.

{code}
[sekikn@mobile hadoop]$ cat /tmp/test.patch
diff --git a/dev-support/releasedocmaker.py b/dev-support/releasedocmaker.py
index 3c398be..6c862ee 100755
--- a/dev-support/releasedocmaker.py
+++ b/dev-support/releasedocmaker.py
@@ -580,4 +580,4 @@ def main():
 sys.exit(1)
 
 if __name__ == "__main__":
-main()
+main( )
[sekikn@mobile hadoop]$ locale
LANG=
LC_COLLATE="C"
LC_CTYPE="UTF-8"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL=
[sekikn@mobile hadoop]$ dev-support/test-patch.sh 
--basedir=/Users/sekikn/dev/hadoop --project=hadoop /tmp/test.patch

(snip)

| Vote |  Subsystem |  Runtime   | Comment

|  +1  |   @author  |  0m 00s| The patch does not contain any @author 
|  ||| tags.
|  +1  |asflicense  |  0m 31s| Patch does not generate ASF License 
|  ||| warnings.
|  -1  |pylint  |  0m 00s| Something bad seems to have happened in 
|  ||| running pylint. Please check pylint
|  ||| stderr files.
|  +1  |whitespace  |  0m 00s| Patch has no whitespace issues. 
|  ||  0m 32s| 


|| Subsystem || Report/Notes ||

| git revision | HADOOP-12111 / 565d9bf |
| Optional Tests | asflicense pylint |
| uname | Darwin mobile.local 14.4.0 Darwin Kernel Version 14.4.0: Thu May 28 
11:35:04 PDT 2015; root:xnu-2782.30.5~1/RELEASE_X86_64 x86_64 |
| Build tool | maven |
| Personality | /Users/sekikn/hadoop/dev-support/personality/hadoop.sh |
| Default Java | 1.7.0_80 |
| pylint | prepatch stderr: 
/private/tmp/test-patch-hadoop/48440/pylint.48440.19203 |
| pylint | postpatch stderr: 
/private/tmp/test-patch-hadoop/48440/pylint.48440.3046 |
| Max memory used | 46MB |




  Finished build.




[sekikn@mobile hadoop]$ cat 
/private/tmp/test-patch-hadoop/48440/pylint.48440.3046
No config file found, using default configuration
Traceback (most recent call last):
  File "/usr/local/bin/pylint", line 11, in 
sys.exit(run_pylint())
  File "/Library/Python/2.7/site-packages/pylint/__init__.py", line 23, in 
run_pylint
Run(sys.argv[1:])
  File "/Library/Python/2.7/site-packages/pylint/lint.py", line 1332, in 
__init__
linter.check(args)
  File "/Library/Python/2.7/site-packages/pylint/lint.py", line 747, in check
self._do_check(files_or_modules)
  File "/Library/Python/2.7/site-packages/pylint/lint.py", line 869, in 
_do_check
self.check_astroid_module(ast_node, walker, rawcheckers, tokencheckers)
  File "/Library/Python/2.7/site-packages/pylint/lint.py", line 944, in 
check_astroid_module
checker.process_tokens(tokens)
  File "/Library/Python/2.7/site-packages/pylint/checkers/format.py", line 743, 
in process_tokens
self.check_indent_level(token, indents[-1]+1, line_num)
  File "/Library/Python/2.7/site-packages/pylint/checkers/format.py", line 963, 
in check_indent_level
expected * unit_size))
  File "/Library/Python/2.7/site-packages/pylint/checkers/__init__.py", line 
101, in add_message
self.linter.add_message(msg_id, line, node, args, confidence)
  File "/Library/Python/2.7/site-packages/pylint/utils.py", line 410, in 
add_message
(abspath, path, module, obj, line or 1, col_offset or 0), msg, confidence))
  File "/Library/Python/2.7/site-packages/pylint/reporters/text.py", line 61, 
in handle_message
self.write_message(msg)
  File "/Library/Python/2.7/site-packages/pylint/reporters/text.py", line 51, 
in write_message
self.writeln(msg.format(self._template))
  File "/Library/Python/2.7/site-packages/pylint/reporters/__init__.py", line 
94, in writeln
print(self.encode(string), file=self.out)
  File "/Library/Python/2.7/site-packages/pylint/reporters/__init__.py", line 
84, in encode
locale.getdefaultlocale()[1] or
  File 
"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/locale.py",
 line 511, in getdefaultlocale
return _parse_localename(localename)
  File 
"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/locale.py",
 line 443, in _parse_localename
raise 

[jira] [Updated] (HADOOP-12233) if CHANGED_FILES is corrupt, find_changed_modules never returns

2015-08-16 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12233:

Assignee: Kengo Seki
  Status: Patch Available  (was: Open)

> if CHANGED_FILES is corrupt, find_changed_modules never returns
> ---
>
> Key: HADOOP-12233
> URL: https://issues.apache.org/jira/browse/HADOOP-12233
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
> Attachments: HADOOP-12233.HADOOP-12111.00.patch
>
>
> In building some unit tests, did a negative tests and hit this condition.  We 
> should put a limit on how many times we loop in the find_x_dirs code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12233) if CHANGED_FILES is corrupt, find_changed_modules never returns

2015-08-16 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12233:

Attachment: HADOOP-12233.HADOOP-12111.00.patch

-00:

* put a limit on the loops in find_buildfile_dir and module_skipdir in the same 
way as find_changed_modules

I confirmed that both of them stop when given an artificial patch (I won't go 
into detail, so that a malicious patch that starves Jenkins can't easily be 
submitted) with the following message:

{code}
ERROR: pom.xml is not found. Make sure the target is a maven-based project.
{code}
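
For illustration, a hypothetical sketch of the kind of bounded loop described above, as it might look inside a function such as find_buildfile_dir (names and the limit are made up; the attached patch is authoritative):

{code}
# hypothetical: walk upward looking for a build file, but give up after a fixed number of steps
count=0
while [[ ${dir} != "." && ${count} -lt 100 ]]; do
  if [[ -f "${dir}/pom.xml" ]]; then
    echo "${dir}"
    return 0
  fi
  dir=$(dirname "${dir}")
  ((count=count+1))
done
return 1
{code}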

> if CHANGED_FILES is corrupt, find_changed_modules never returns
> ---
>
> Key: HADOOP-12233
> URL: https://issues.apache.org/jira/browse/HADOOP-12233
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12233.HADOOP-12111.00.patch
>
>
> In building some unit tests, did a negative tests and hit this condition.  We 
> should put a limit on how many times we loop in the find_x_dirs code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12314) check_unittests in test-patch.sh can return a wrong status

2015-08-13 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12314:

Assignee: Kengo Seki
  Status: Patch Available  (was: Open)

> check_unittests in test-patch.sh can return a wrong status
> --
>
> Key: HADOOP-12314
> URL: https://issues.apache.org/jira/browse/HADOOP-12314
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Attachments: HADOOP-12314.HADOOP-12111.00.patch
>
>
> Follow-up from HADOOP-12247. check_unittests returns the value of  
> $\{result}, but the status of *_process_tests is added to $\{results}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

